
Text-Generation-Inference v1.0+ new license: HFOIL 1.0 #726

Closed · OlivierDehaene opened this issue Jul 28, 2023 · 35 comments

@OlivierDehaene (Member)

Text-Generation-Inference, aka TGI, is a project we started earlier this year as an internal tool to power optimized inference of Large Language Models, first on the Hugging Face Inference API and later on Hugging Chat. Since then it has become a crucial component of our commercial products (like Inference Endpoints) and those of our commercial partners, like Amazon SageMaker, Azure Machine Learning, and IBM watsonx. At the same time, the project quickly grew in popularity and was adopted by other open source projects like Open-Assistant and nat.dev.

TGI v1.0 new license: HFOIL 1.0

We are releasing TGI v1.0 under a new license: HFOIL 1.0.
All prior versions of TGI remain licensed under Apache 2.0, the last Apache 2.0 version being version 0.9.4.

HFOIL stands for Hugging Face Optimized Inference License, and it has been specifically designed for our optimized inference solutions. While the source code remains accessible, HFOIL is not a true open source license because we added a restriction: to sell a hosted or managed service built on top of TGI, we now require a separate agreement.
You can consult the new license here.

What does this mean for you?

This change in source code licensing has no impact on the overwhelming majority of our user community, who use TGI for free. Additionally, both our Inference Endpoints customers and those of our commercial partners remain unaffected.

However, it will restrict non-partnered cloud service providers from offering TGI v1.0+ as a service without requesting a license.

To elaborate further:

  • If you are an existing user of TGI prior to v1.0, your current version is still Apache 2.0 and you can use it commercially without restrictions.

  • If you are using TGI for personal use or research purposes, the HFOIL 1.0 restrictions do not apply to you.

  • If you are using TGI for commercial purposes as part of an internal company project (that will not be sold to third parties as a hosted or managed service), the HFOIL 1.0 restrictions do not apply to you.

  • If you integrate TGI into a hosted or managed service that you sell to customers, then consider requesting a license to upgrade to v1.0 and later versions - you can email us at api-enterprise@huggingface.co with information about your service.

Why the new license?

TGI started as a project to power our internal products, and we see it as a critical component of our commercial solutions. TGI is not meant as a community-driven project, but as a production solution that’s widely accessible to the community. We want to continue building TGI in the open, and will continue to welcome contributions. But unlike community-driven projects like Transformers and Diffusers focused on making machine learning accessible, TGI is focused on performance and robustness in production contexts, with the goal of building commercial products.

What about Hugging Face contributions to open source?

Our mission as a company is to democratize good machine learning. An important component of democratization is making good machine learning more accessible. We achieve this through community-driven open source projects like Transformers, Diffusers, Datasets, our free courses (Transformers, Diffusers Audio, RL), and many more libraries collectively garnering about 240k GitHub stars as of this writing. Our long term commitment to open source has not changed.

@dongs0104 (Contributor)

Hello, is this license similar to the Elastic License v2 used by Elasticsearch?

@julien-c (Member)

@dongs0104 correct

@Atry (Contributor) commented Jul 28, 2023

TGI depends on vLLM, which is licensed under Apache 2.0. What if vLLM changes its license for the same reason and forbids TGI from using vLLM?

@OlivierDehaene (Member, Author)

TGI depends on our fork of vLLM, which is Apache 2.0.

@shibanovp

Transitioning to HFOIL 1.0 is understandable, but please consider introducing both free/growth and enterprise plans. It would be great if I could pass an access token to args, and have it function as a license key. I believe you could collect this and this for telemetry.

Here are the benefits I foresee:

  1. Your sales team wouldn't have to spend time discussing "We are innovative ... with 0.5 active monthly users" and could instead focus on talking with qualified prospects.
  2. Startups with pre-configured billing could effortlessly use TGI.

@OlivierDehaene (Member, Author) commented Jul 29, 2023

@shibanovp I think you are misunderstanding the terms of the license.

You can continue using TGI for free, without ever interacting with our sales team, as long as the service you are building or selling is not a container serving engine tightly integrated with TGI (for example, a serverless TGI experience where users can pick a Hub model and a TGI container is automatically started for them, like we offer in Inference Endpoints).

Then, and only then, would you need to contact us to get a license, and only if you want to use TGI 1.0+.

To summarize:

  • Building and selling, for example, a chat app that uses TGI as a backend is fine, whatever version you use (see the sketch below).
  • Building and selling an Inference Endpoints-like experience using TGI 1.0+ requires an agreement with HF.
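For illustration, a chat backend in the first category only talks to a self-hosted TGI server over HTTP. A minimal sketch (the host, model choice, and generation parameters are placeholders; the /generate endpoint and its JSON shape follow TGI's documented REST API):

```python
import requests

TGI_URL = "http://localhost:8080/generate"  # wherever your TGI container listens

def chat(prompt: str) -> str:
    # POST to TGI's /generate endpoint and return the completion text.
    resp = requests.post(
        TGI_URL,
        json={"inputs": prompt, "parameters": {"max_new_tokens": 128}},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["generated_text"]

print(chat("Summarize the HFOIL license change in one sentence."))
```

Selling an app like this is fine under HFOIL; the license only targets reselling TGI itself as the hosted service.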

@rom1504 commented Jul 29, 2023

I think if you state in the README that this is not open source software, contributors will not be confused.

@OlivierDehaene (Member, Author)

@rom1504 all contributors that we could contact received this information days in advance.

We warn new contributors about the license change directly in their PR when reviewing them (like in #617) and we might add a proper Contributor License Agreement in the future.

Some contributors decided to close their PR given the new license (#529 #719).

What type of language would you like to see in the README?

@shibanovp

@OlivierDehaene
Thank you for the clarification. I do have a question, however. Can the self-service product I'm selling allow clients to select a fine-tuned model (based on their private data) and create/destroy it with Terraform on Kubernetes, in either a public cloud or on-premise, with TGI as part of the stack?

@OlivierDehaene (Member, Author)

@shibanovp I do not think I am the best interlocutor to answer such a precise question.

Are you ok with contacting us at api-enterprise@huggingface.co to loop in our legal team?

@shibanovp

Are you ok with contacting us at api-enterprise@huggingface.co to loop in our legal team?

I thought as much...

This is what I was referring to in #726 (comment)

The workflow I would prefer is as follows:

  1. I add a credit card.
  2. I pass an access token to args.
  3. all works

However, the current workflow is:

  1. I have to prepare for the v1.0 migration.
  2. I must contact your team.
  3. I follow X number of steps.
  4. all works
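Purely as a sketch of the preferred flow's step 2 (passing an access token to args): nothing below exists in TGI today. The TGI_LICENSE_TOKEN variable and the billing endpoint are hypothetical, invented only to illustrate the token-as-license-key idea.

```python
# Hypothetical: TGI has no license-token mechanism or billing API.
import json
import os
import subprocess
import urllib.request

BILLING_URL = "https://billing.example.com/v1/validate"  # invented endpoint

def token_is_active(token: str) -> bool:
    # Ask the (hypothetical) billing service whether the token is paid up.
    req = urllib.request.Request(
        BILLING_URL, headers={"Authorization": f"Bearer {token}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("active", False)

token = os.environ.get("TGI_LICENSE_TOKEN", "")  # hypothetical env var
if not token_is_active(token):
    raise SystemExit("License token inactive: add a payment method, then retry.")

# Hand off to the real TGI launcher once the hypothetical check passes.
subprocess.run(
    ["text-generation-launcher", "--model-id", "tiiuae/falcon-7b"], check=True
)
```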

@rom1504 commented Jul 29, 2023

@OlivierDehaene "This tool is not opensource. We provide open access via license XYZ"

@camenduru

Hi @julien-c 👋 Hi @OlivierDehaene 👋 Bad News 😭 Got it! You're trying to sell to big companies, but that might harm small startups. Please add a volume-based license. It will apply to the big companies making significant profits, while still supporting smaller businesses.

@OlivierDehaene (Member, Author)

@shibanovp now that you explained your use case, this kind of telemetry is really interesting and an integration that we will consider for sure!

@shibanovp

@camenduru
They aren't trying to sell to big firms. They're stopping them from profiting off TGI while HF pays for development. HF could explain this better, like CockroachDB did before.

Startups taking a severe hit is just collateral damage. I don't believe the intention was to penalize everyone who is not using this feature.

@OlivierDehaene
Thank you!

This makes things complex for teams without legal experts. NDAs must change since I have to share my client's work with you. Startups might suffer as they stick to v0.9.4 while you continue to rapidly roll out new features and fixes, as you always do.

@Atry (Contributor) commented Jul 29, 2023

A big thank you to @OlivierDehaene and @Narsil for creating this awesome project. I understand HuggingFace's motivation in choosing to close the source. I was the maintainer of HHVM's OSS version when I was working for Meta. We were in a similar situation, where the only notable external user of HHVM was one of Meta's competitors. Meta eventually disbanded my team and eliminated most of the OSS support workload for HHVM.

Currently I am working for Preemo. We use text-generation-inference internally in a way that might be incompatible with the new license, so we decided to create an open-source fork of text-generation-inference.

https://github.com/Preemo-Inc/text-generation-inference

I am pretty familiar with the code base, and we will actively maintain the fork, given that we use it internally.

Forking text-generation-inference is not an entirely bad thing:

  1. We are not satisfied with the current Makefile + pip + poetry + Dockerfile build. I anticipate we will create some patches to improve it, making it more unified, more reproducible, and easier to deploy.
  2. Currently, HuggingFace does not review pull requests in a timely manner.

I plan to redirect my unreviewed pull requests to the new repository, and I anticipate we will export other Preemo internal changes as public pull requests in the next couple of weeks.

Currently, text-generation-inference's CI depends on HuggingFace's private infrastructure, so it would not work in the new repository. We plan to fix the CI in the next couple of weeks.

You are welcome to contribute to the open-source fork of text-generation-inference!

@OlivierDehaene (Member, Author)

@Atry the reason we were not reviewing/merging your proposals is that we have been discussing this license change internally for a long time, and we felt that onboarding new contributors would make the process harder.

You are more than welcome to fork TGI; a lot of great projects (Grafana comes to mind, for example) started this way.

Thank you for your interest in TGI in the first place and the kind words.

@flozi00 (Contributor) commented Jul 30, 2023

Since there is a lot of criticism about this step, maybe you would also like to hear some positive feedback.
I can totally understand this step; as far as I understand it, the only restriction hurts cloud providers who simply install this project on their own servers and sell it.

Maybe it's just not explained clearly enough. What do you think about adding some real-world example cases?
It would be very sad if this hurt the project's development, since I still think open source is more powerful than closed source, and this license is still very open for most use cases.
To be clear, I would be open to writing such use cases, which you could publish for clarification.

@andreapiso

TGI depends on vLLM, which is licensed under Apache 2.0. What if vLLM changes its license for the same reason and forbids TGI from using vLLM?

I mean, HuggingFace was entirely built on open source. Imagine if PyTorch came out one day with "you can still use PyTorch for commercial purposes, but if you plan to wrap some PyTorch functions to build a framework and a commercial service based on it, you need to get a license".

@weiZhenkun commented Jul 31, 2023

Hi @OlivierDehaene,
We are currently using the 763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-tgi-inference:2.0.0-tgi0.8.2-gpu-py39-cu118-ubuntu20.04 image, provided via https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-text-generation-inference-containers, to deploy our own LLM model on AWS.

Questions:

1. Can this project be upgraded to v1.0+, or is it restricted by this license?

2. If Q1 is not restricted, can we use this image with TGI 1.0+ to deploy our own model on AWS, or is that restricted by this license?
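For context, a typical deployment of these DLCs looks roughly like the sketch below. It assumes the sagemaker Python SDK's documented HuggingFaceModel API and a SageMaker execution role; the model id, TGI version, and instance type are placeholders, and none of this is a statement about licensing.

```python
import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

role = sagemaker.get_execution_role()  # assumes this runs inside SageMaker

# Resolve the HF TGI DLC image URI for a given TGI version.
image_uri = get_huggingface_llm_image_uri("huggingface", version="0.8.2")

model = HuggingFaceModel(
    image_uri=image_uri,
    env={"HF_MODEL_ID": "tiiuae/falcon-7b"},  # placeholder model
    role=role,
)

# Spin up a real-time endpoint and send it a test prompt.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")
print(predictor.predict({"inputs": "Hello"}))
```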

@Narsil (Collaborator) commented Aug 2, 2023

@flozi00 We have created a FAQ which hopefully is clearer: #744. Do you think it's more understandable?

As stated in the FAQ, I think our biggest mistake was not making the change, or at least our intent, public sooner. But we didn't have the license yet, and figuring out how to keep the code open while protecting our business at the same time took time.

@flozi00 (Contributor) commented Aug 2, 2023

Yeah, it looks pretty good now.
I really like that way of being open and accessible while staying protected.
Is the license usable by other companies, or can it only protect Hugging Face projects?
Keep fighting for open source 👍

@jcrupi commented Aug 2, 2023

Maybe a better option would have been to keep it Apache 2.0 and create an enterprise version on top, for which you provide a commercial license, like Red Hat and other companies that offer enterprise value on top of open source. Now, in this new model, why would a startup building a product use the open source version, knowing that it carries an uncertain commercial license?

@yixu34 commented Aug 3, 2023

We at Scale have decided to create a fork of text-generation-inference from 0.9.4. We're calling it OpenTGI: https://github.com/scaleapi/open-tgi.

This is a dependency we use inside LLM Engine, our open source framework for serving and fine-tuning open source LLMs on your own infrastructure.

We're committed to keeping both Apache 2.0 licensed.

@Narsil (Collaborator) commented Aug 3, 2023

@flozi00 The license isn't specific in its terms to Hugging Face.

@jcrupi This has been discussed internally; however, doing that would mean reserving the best of our features for commercial usage, keeping them out of open source, and maintaining an internal fork. We could even have made it closed source and used only that for our own offerings.

However, we felt that this wasn't really helping the community at large.
Currently we can keep everything in a single location, visible to all, and keep pushing performance features as much as we want without asking ourselves which license each one should fall under.

0.9.4 is still available to all without any restrictions.

@NikolaBorisov

Given this license change, we also created a fork of text-generation-inference here:
https://github.com/deepinfra/text-generation-inference

We are using TGI in our inference service. We will keep this fork under Apache 2.0.

@anttttti

@camenduru They aren't trying to sell to big firms. They're stopping them from profiting off TGI while HF pays for development. HF could explain this better, like CockroachDB did before.

The OS community was converging on TGI, since unlike competitors it provides all the new algorithmic innovations in one bundle: tensor sharding, quantization, continuous batching, etc. (a conceptual sketch of continuous batching follows at the end of this comment). It could have become the standard library for deployment, if it wasn't one already. It wasn't a case of "HF pays for development", but the very opposite.

Startups taking a severe hit is just collateral damage. I don't believe the intention was to penalize everyone who is not using this feature.

If that were the case, HF could have made the condition depend on the size of the company. Just copy Llama 2's clause 2: "...greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta...". Even better, make a general license based on this that could be adopted industry-wide.

As it is, HF expects to get free OS contributions while at the same time drawing its revenue from OS startups. This should have been thought out better; it doesn't benefit Hugging Face or the OS community.
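For readers unfamiliar with continuous batching, mentioned above: a minimal conceptual sketch, not TGI's actual implementation. The point is that finished sequences are evicted and queued requests admitted between decoding steps, so the batch stays full instead of draining.

```python
import random
from collections import deque

class Seq:
    """Toy request: decodes one fake token per step, finishes randomly."""
    def __init__(self, rid):
        self.rid, self.tokens, self.finished = rid, [], False

    def step(self):
        self.tokens.append(random.randint(0, 9))   # stand-in for a model call
        self.finished = len(self.tokens) >= random.randint(3, 8)

queue = deque(Seq(i) for i in range(20))
active, MAX_BATCH = [], 4
while queue or active:
    while queue and len(active) < MAX_BATCH:        # admit into free slots
        active.append(queue.popleft())
    for seq in active:                              # one decode step per sequence
        seq.step()
    for seq in [s for s in active if s.finished]:
        print(f"request {seq.rid} done after {len(seq.tokens)} tokens")
    active = [s for s in active if not s.finished]  # evict finished sequences
```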

@sri-dhurkesh

Hi, we are currently using tgi-1.0.0 as a backend to build a chatbot for our clients. Do we need to inform your sales team about this?

@hmcp22 commented Aug 23, 2023

Hi, I work for a document AI company, and we want to use TGI to serve LLMs for a number of use cases related to document data extraction, chat-with-your-docs, document summarization, and other document-related tasks. We have no plans to give clients direct access to the LLMs; instead, these would sit behind an application layer designed to harness LLMs for predefined use cases. Would we need to request a license for this purpose?

@kiratp commented Aug 31, 2023

HFOIL is not a true open source license because we added a restriction: to sell a hosted or managed service built on top of TGI, we now require a separate agreement.

Practically anything built on any cloud today and made available to users in a browser qualifies as "a hosted or managed service".

I do not begrudge HF trying to prevent managed-service competitors from using their own code, BUT there is enough ambiguity in the actual licensing language that it would be irresponsible for any SaaS vendor to use this without a license from HF.

@abdullahsych

Does this license also restrict commercial usage of AWS DLCs with TGI v >= 1.0?

https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-text-generation-inference-containers

@jeffboudier (Member)

@abdullahsych no, your usage of TGI v1.0+ within the HuggingFace Text Generation Inference Containers DLCs on SageMaker is not restricted by the HFOIL 1.0 license, thanks to the agreement between Hugging Face and Amazon.

@orellavie1212 commented Dec 26, 2023

@abdullahsych no, your usage of TGI v1.0+ within the HuggingFace Text Generation Inference Containers DLCs on SageMaker is not restricted by the HFOIL 1.0 license, thanks to the agreement between Hugging Face and Amazon.

Which cloud providers besides AWS does this comment cover? For example, Azure?

@kno10 commented Jan 16, 2024

Has this plan ever worked in the long run?

  • MySQL -> MariaDB
  • Elastic -> OpenSearch

At some point, some of the developers will leave Hugging Face and regret this license change, then begin contributing to some open-source fork.

You may not distribute the Software as a hosted or managed, and paid service, where the service grants users access to any substantial set of the features or functionality of the Software. If you wish to do so, You will need to be granted additional rights from the Licensor which will be subject to a separate mutually agreed agreement.

As "substantial" is not well-defined, you have to read this as You may not distribute the Software as a hosted or managed, and paid service.

I've been experimenting with vLLM and TGI somewhat in parallel. I will focus on vLLM now.

@OlivierDehaene (Member, Author) commented Apr 8, 2024

I am very happy to announce that the license was reverted to Apache 2.0.

This concerns both TGI and the Text Embeddings Inference repository.

We reflected a lot on the change to HFOIL since July. We understand how alienating this change must have felt for our users, and we are sorry.

At the time, we felt that this change was necessary to safeguard our inference solutions from larger companies. A lot has changed in the past year, and the reasons we made the change in the first place no longer apply.

This decision is irreversible; the repository will remain under the Apache 2.0 license for all forthcoming releases.

The team and I are super excited by this change and for the future of HuggingFace inference solutions :)

OlivierDehaene unpinned this issue Apr 8, 2024