Hiatus explanation (originally "GreasyFork userscript entry gone") #173

Open
pointydev opened this issue May 7, 2024 · 20 comments

@pointydev

pointydev commented May 7, 2024

Plain and simple, the userscript is no longer available on GreasyFork (returning 404). Personally, I have ratelimit issues with OpenUserJS due to my IP, could we get a fully compiled version on GitHub (perhaps in a separate repository or using GitHub releases)?

Thanks,
pointy


Thank you for all your hard work @raingart, I wish you well with whatever you decide to do in the future.

@raingart
Owner

raingart commented May 7, 2024

I think it's worth explaining. I thought the script could go missing for at least a few weeks without anyone noticing, or maybe never be noticed at all.

I think I won't update anything for a couple of months, so you don't have to worry about missing something. Maybe sometime in the future, when my ass has cooled down and I'm ready to serve ungrateful upstarts again.

By the way, it's not difficult to assemble the script yourself. I made a .bat collector that doesn't require installing third-party tools like Python. Well, this is for the future, for those who read this.
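
For those who'd rather not hunt down the .bat, the same idea can be expressed as a tiny Node script. This is purely a hypothetical sketch: the folder layout, file names, and the assumption that the build is a straight concatenation are mine, not taken from the repo.

```javascript
// Hypothetical collector sketch (NOT the repo's actual .bat): assumes the userscript
// is built by concatenating a metadata header with every plugin file in a folder.
const fs = require('fs');
const path = require('path');

const pluginDir = path.join(__dirname, 'plugins');      // assumed plugin source folder
const headerFile = path.join(__dirname, 'header.js');   // assumed ==UserScript== block
const outFile = path.join(__dirname, 'nova.user.js');   // assumed output name

const header = fs.readFileSync(headerFile, 'utf8');
const parts = fs.readdirSync(pluginDir)
  .filter(name => name.endsWith('.js'))
  .sort()                                                // deterministic order
  .map(name => fs.readFileSync(path.join(pluginDir, name), 'utf8'));

fs.writeFileSync(outFile, [header, ...parts].join('\n'));
console.log(`Wrote ${outFile} from ${parts.length} plugin files`);
```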

Yes, some critics began to complain about the script and about me personally. It drove me crazy, because GreasyFork has no way to restrict who can contact you.
And then it dawned on me: why am I wasting my time and energy just so that all sorts of ungrateful bastards can pester me?

I used to run several parsers and bots for data processing and administration, with a monthly service fee for clients. And you know what, no one there was ever rude to me; every request that came in was competent.

In my free time I wrote software for smartphones and browser extensions, and there will always be dissatisfied people. Amazing impudence; it's true what they say, "people don't value the things they get for free." They believe they have rights and that I should listen to their nonsense. After all, I have nothing else to do.

Or the latest news: I managed to write a webhook to intercept and modify data from YouTube. Combined with the YouTube data search module, which has been in the script for a long time, it makes it possible to replace YouTube data on the fly.
For example, injecting advertising or subtitles. There was even a working prototype of dual subtitles: translation into any language using YouTube, with the original and the translation displayed in parallel. Or feeding the transcript to ChatGPT to get a brief summary of the video. But the problem with API keys turned out to be not so simple, and YouTube began blocking my access to some data.
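
To make the "replace YouTube data on the fly" idea concrete, here is one common userscript technique: wrapping window.fetch and rewriting the JSON before the page sees it. This is only a sketch of the general approach; whether the author's actual hook works this way, and which endpoint it targets, is an assumption on my part.

```javascript
// Sketch: intercept YouTube's internal API responses from a userscript context.
// The /youtubei/v1/player check is illustrative; the real hook may target other data.
const origFetch = window.fetch;
window.fetch = async function (...args) {
  const response = await origFetch.apply(this, args);
  const url = typeof args[0] === 'string' ? args[0] : (args[0] && args[0].url) || '';
  if (!url.includes('/youtubei/v1/player')) return response;

  try {
    // Clone so the original stream stays usable if anything below fails.
    const data = await response.clone().json();
    // ...modify `data` here, e.g. add or swap a caption track...
    return new Response(JSON.stringify(data), {
      status: response.status,
      statusText: response.statusText,
      headers: response.headers,
    });
  } catch {
    return response;
  }
};
```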

But I'm not interested anymore. Why waste my free time on this? I don't need this functionality. Does anyone else need it? I don't know. I've even stopped periodically checking for new scripts with interesting features. I actually feel freer.

I think this time will show how successful Nova was. If it breaks without me, then it’s time for the emu to go into the trash bin of history.

I haven't even mentioned the endless spam yet, with the constant offers to integrate spyware or adware modules into the product. Or a ransom.
Why am I even doing this? Like everyone else, I should just rent a server for 20 USD, sell subscriptions to my own VPN, and at the same time leak user information to a long list of spammers to optimize sales:

[screenshot]

By the way, do you know what the biggest complaint was when I posted it to the web store? "The script does not work in my mobile browser in the mobile version of YouTube." Why the hell should it work? Did I mention that anywhere? It knocked the ground out from under my feet so badly that I didn't even know how to respond. They think that if their browsers (Cromite, Bromite, Via, Kiwi and other junk) have a "userscript support" tab, then it should just work.

Thank you @pointydev and many other caring users. I tried for all of you. For me personally, most of the functionality goes unused. My script is not the center of the YouTube-tweaking world; there are other, far more promising projects. I believe in them. Perhaps their hands will prove stronger than mine.

Thanks all,
raingart

@misspent

misspent commented May 7, 2024

Shame to see it disappear, and you've got to hate it when some entitled people ruin it for the rest of us. I guess we'll have to find an alternative that can come close, which I doubt, but you never know. You deserve a long "break", and if you don't come back, then adios and take care.

@jhiajhainjhon

jhiajhainjhon commented May 7, 2024

sad to see that you had to go through those things.
just want to let you know that there are people who are grateful for your work, like me.
honestly i think nova saved youtube for me personally; i was so tired of youtube that i hesitated before clicking on any youtube link.
but i'm glad to know that you want to treat yourself better, because you definitely deserve that in my opinion.
this "people don't value the things they get for free" thing hits me so hard; it reminds me of how some of my friends treated me, but i felt so much better a couple of months after i left them.
i hope youtube will have you on their team, or another company that values your work.
may the odds be ever in your favor.

@pointydev pointydev changed the title GreasyFork userscript entry gone Hiatus explanation (originally "GreasyFork userscript entry gone") May 7, 2024
@KamelittaOida

KamelittaOida commented May 11, 2024

omfg -.- just now tried to update the script in another profile then visited greasyfork.

well so it goes.

all the best. thanks for all the fish

@Eisys

Eisys commented May 12, 2024

Another one bites the dust. I started over 10 years ago with Youtube Center, then Youtube Plus which turned into Iridium and now I'm on Nova.

This is such incredibly sad news. I can't live without:

  • Pin player while scrolling (No other extension does it as well as Nova)
  • Player shortcuts always active
  • Default tab on channel page

And just wildly useful features that make YouTube more tolerable:

  • Fix channel links in sidebar (I've gotten really used to this)
  • Date format display ("1 year ago" is NOT useful or accurate enough, Youtube..)
  • Disable scroll to top on click timestamps

Of course this is a small selection of all the wonderful things this script can do.

PS: As a small bit of 'criticism', I think perhaps you've let Nova get too big. This is what happened to Youtube Center. Yeppha was constantly taking people's requests for tiny, niche features and implementing them, then getting overwhelmed and I feel like that's been the case for Nova as well.
It's okay to say no to adding a feature.
Of course, if it's not too difficult to add and doesn't hinder other features, it's not a big deal, and extra customization is awesome.
But at some point you're having to deal with too many feature requests and bug reports on them, etc. It can get overwhelming.
And yeah, it sucks when people don't appreciate a free thing. A lot of people also simply don't realise how much work something like this is. They get a bug, file a report and want it fixed in 3 seconds. And when that doesn't happen, they get angry.
Fuck people like that.. Try to look past asshole users and concentrate more on the positive feedback.

In any case, Thank You very much for all the work you've done on Nova!! I hope you come back to it at some point.

@zkisaboss

This script is amazing! It's disappointing that you won't be maintaining it for the near future, but I want to express my gratitude. Your work has really made my YouTube experience exceptional, I hope you return someday. Take care!

@raingart
Owner

raingart commented Aug 6, 2024

Hey guys!
Yep, I'm not dead yet.

I don't know when I'll release a new Nova version, but here's an intermediate one. There's no particular reason to install it, because it doesn't expand the functionality; it deepens the internal mechanisms, i.e. optimization and load reduction. In other words, there's practically nothing new in it, and some things are "damaged" because they're simply unfinished.

dev ver (info #182)

In recent months, I have put all my energy into local LLMs. I have a lot to say, from initial surprise and delight to disappointment and humiliation.

LLMs have their own GitHub analogue (a model repository): https://huggingface.co/

and a package manager (roughly analogous to git): https://ollama.com/
There are also GUI clients.

The GUIs listed here are all written in Electron; the only real difference between them is UX.

Of those, the ones I recommend for code are:

About Nova. In truth, that conflict was just the last straw. I began to feel that I had reached the limit of the script's current capabilities. The current functionality already exceeds what I actually use. Of course, there is always room for development (for example, using shadow DOM, or intercepting and changing the data the YT framework sends), but the gains are disproportionate to the resources required. For the next qualitative step, global changes to the architecture and operating principles of the script are necessary. But I don't want to do that now. And I became genuinely curious how other users would solve these problems.
So my break continues. Initially, the script appeared only because I thought I could do it as well as or better than others. But now I have no one to compete with, and that makes it less interesting. A possible next branch of development would be working with LLMs; maybe they will push me, or someone else, toward better ideas for changes.
Perhaps by the end of the year I will be able to move the script to Manifest V3. It's not difficult, but what causes a storm inside me is that someone (Google, with its ultra-secure and "user friendly" manifest and service workers) is trying to impose on me something worse than what I have now. This is where my stubbornness kicks in.

p.s. Thank you all for your support. If I return, it will be thanks to all of you.

@pointydev
Author

Thanks for the update on how things are going @raingart, glad to see you've been keeping yourself busy. I'll check out the dev build if I end up running into the TrustedHTML error (haven't so far). Appreciate you. <3

@KamelittaOida

yoooo buddy. tysm. hope ur doing well.

so far the few things I need from Nova still work 😅

> but what causes a storm inside me is that someone is trying to impose on me something worse than what I have now. This is where my stubbornness kicks in.

you need to step away a little bit, figuratively, and not take everything so seriously.
One example I can give is FreeTube (funnily enough, it's YouTube-related and on GitHub too). I gave a suggestion and it was turned down because "there are already too many options". ... oh well

> I understand your viewpoint but there are a lot of users creating issues for things like this. If we implemented them all, FT would have a lot of settings on top of the already large settings page, which is in itself an issue. That's why FreeTubeApp/FreeTube#251 is the only good solution for this.

FreeTubeApp/FreeTube#4989

I'm a pretty altruistic person, but if I had an open-source project and the donations didn't do so well in comparison to the bitching, I really don't know how long I would keep at it.
Unless, of course, the project is something I needed myself and its being public is just a side effect, which AFAIK is how many open-source projects start.

@zkisaboss

Thanks for the update @raingart. It's great to hear that you're exploring new opportunities and we're looking forward to seeing what the future holds for you and Nova. Take care and thanks for keeping in touch! 👊

@markran

markran commented Aug 27, 2024

I really love Nova. It's just so great and nothing else works nearly as well. Very sad to hear that all the typical, stupid internet shit finally got to you. Certainly understandable, but still such a tragic loss. Always remember that what you created is a remarkable achievement. No low-life spammer or 13 year-old griefer can ever take that away from you.

I truly hope someday you'll return.

@raingart
Owner

raingart commented Dec 9, 2024

Hi guys!

check the beta ver

I'm back from "the Warp"! I wanted to say a lot, but I'm sitting here not knowing what to write. Perhaps that's because it was very difficult for me to return to Nova. The script is incredibly huge! Let's start in order.

There's a lot of text about LLMs below; I realized I got carried away there, so I moved it to the end of the message.

I only managed to put together a working version of Nova recently. Before that, many parts were "disassembled." Initially, I wanted to review all the plugins and rewrite them, but I ran out of strength and only partially resumed work on them. Unfortunately, LLMs were useless for this. Many things in the external mechanisms were optimized, which honestly took as much time as writing them from scratch. Many parts were written at different times, and there was no "unity" among them.

I initially kept a changelog, but at some point, I stopped because the fixes affected almost all plugins.

As users, you may not notice any changes apart from the settings page URL changing. That is because the data synchronization method has changed and is not compatible with the old one. This has reduced the situations where plugins seemed not to load and the page stayed empty, if you've ever run into that.

Also, a lot of work has been done on optimization. Not that the results are immediately noticeable, but in theory YouTube should now run at the same speed with Nova as without it.

I wanted to prepare a 'gift' for you all by Christmas, but this is all I managed to do. Nothing new has been added except for one plugin that, in essence, is part of another. I was just about to be overwhelmed by this mess when you sent your request.

Here's the list of new features, excluding fixes:

- add `Apply if URL has link to comment` option for [video-autopause]
- add `Apply if URL has link to comment` option for [video-autostop]
- add `Custom colors` option for [sponsor-block]
- add `Auto-hide` option for [player-quick-buttons]
- add `Add screenshot subtitle` option for [player-quick-buttons]
- add `Get screenshot size from` option for [player-quick-buttons]
- merge `clipboard` and `formats` in the Screenshot output for [player-quick-buttons]
- add `popup` to the Screenshot output for [player-quick-buttons]
- add `Min iframe width` option for [embed-popup]
- add `speed` option for [time-remaining] by @muescha
- cut `Show full title` from [thumbs-title-normalize] and move it into a new plugin, [thumbs-title-show-full]
- add `No gradient` option for [player-progress-bar-color]
- remove `Show info at start chapter` and `Chapter timeout` in [player-indicator]

new plugin
- [subtitle] - `Custom subtitle`

But don't think that I just gave up on Nova; it was so difficult to get back into it that I ended up creating several small libraries and extensions on the side. And I swear to you that I didn't spend more than two days on each one.

There is another extension that works, but I’m too lazy to add dragging the position of elements and moving between tabs.


By the way, I tried using whisper.cpp to generate subtitles for videos. It turned out that the quality is no better than what YouTube itself generates, and the generation is slower.

Basically, the beta ver is kind of working, lol. Write to me if you want anything fixed or any new features added; I'll try my best. I haven't looked at what's new for YouTube in over half a year. Maybe there are already better scripts or extensions out there. Or ones that have "stolen" parts from me; for example, I just found out who took the music identification module from me, without crediting the author, of course. So there's no point in lying about it, because no one besides me uses a method that idiotic.

Ask: When will there be a normal release?

Answer: It's going to be "sometime."

LLM

What am I talking about and how does it work - https://www.youtube.com/watch?v=UZDiGooFs54

For the first two months, I just left it alone; I had no motivation. Then, at the end of summer, I finally took matters into my own hands and started making things happen. Ultimately, I disassembled the script into parts and began experimenting with local, less sophisticated large language models (LLMs) for refactoring code with the directive 'refactor'. The result initially seemed very good to me; the code became understandable and more organized. However, this was a mistake: there were both logical and practical errors. I had sent entire plugins in the hope that the models would understand and grasp the logic of their separate parts. It took me probably another two months to fix what they missed and to reconsider my methods.

However, this was excellent testing for LLMs. I tested all the LLMs up to 25 billion parameters, and many of them simply discarded parts of the code. This affected even larger LLMs like Command R and Gemini. GPT-4 also didn't impress me. Out of about 30 LLMs I tested in practice, only around 5 turned out to be relatively decent, not destroying parts of the code or doing so minimally. At least they're not complete garbage. Moreover, I tested only code, RP, and RAG; RP is another story. If you see that a model is praised as strong in prose and writing, that's usually just hype, and it's garbage good only for trivial texts. Also, I note that fine-tunes don't surpass the stock models; in fact, most of them are worse. It's like alchemy, where you randomly mix things in the hope of getting gold but end up with chimera-like abominations at best. I don't have the resources, knowledge, or even the datasets for that. It's amusing when people "educate" models with 100-10k of their own examples/books; that's such a tiny fraction of the data that it's within the margin of error.

Progress has been made in image generation; the results are much more tangible. What's more, I've even quantized several Stable Diffusion models. LoRA is indeed a simple and effective fine-tune. FLUX, though, is a major breakthrough that approaches closed models like MidJourney and DALL-E. MoE (Mixture of Experts) for personal or local use is garbage; it's purely a service-oriented architecture suitable for hosting, capable of quickly and efficiently solving various tasks for different users. Overall, they're worse in every way except speed. They're easy to spot: either the name states it explicitly, or it's indicated by an abbreviation like 8x20B, where '8' is the number of 'experts'.

As for model quantization (compression) formats, GGUF is the uncontested winner by a significant margin. It's not the best, but it's the most accessible. If you're downloading, choose q5_k_m or, failing that, q4_k_m. Lower quants are only suitable for prose, nothing more. Higher quants, like q8 or F16, are generally not worth it, but if memory allows, why not? Overall, q6 is more than sufficient.
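
As a rough sanity check before downloading, you can estimate a GGUF's size from the parameter count times the approximate bits per weight of the chosen quant. The bits-per-weight figures below are ballpark values I'm assuming, not exact numbers from llama.cpp:

```javascript
// Rough GGUF size estimator; bpw values are approximations, and real files also
// carry metadata plus some tensors kept at higher precision.
const bitsPerWeight = { Q4_K_M: 4.8, Q5_K_M: 5.7, Q6_K: 6.6, Q8_0: 8.5, F16: 16 };

function estimateGiB(paramsBillions, quant) {
  const bytes = (paramsBillions * 1e9 * bitsPerWeight[quant]) / 8;
  return (bytes / 2 ** 30).toFixed(1);
}

for (const quant of Object.keys(bitsPerWeight)) {
  console.log(`7B @ ${quant}: ~${estimateGiB(7, quant)} GiB`);
}
// A 7B model at Q5_K_M lands around 4-5 GiB, which is why it still fits on an
// 8 GB GPU with some room left over for context.
```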

As for running models, use the simplest tools like LM Studio or koboldcpp to start out and get familiar, if you don't want to delve deeper.

As for which models to use, definitely go for 7B models; they're the minimum that's practically useful. Anything below that I consider garbage; it's better not to waste time on them and simply use models via an API, which is almost always free.
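
On the "use models via API" point: LM Studio, koboldcpp, and ollama can all serve an OpenAI-compatible HTTP endpoint, so the same few lines work against a local server or a hosted one. The port and model name below are just common defaults and are assumptions about your setup:

```javascript
// Minimal chat request against an OpenAI-compatible endpoint (Node 18+ has global fetch).
// baseUrl/model are placeholders: LM Studio defaults to port 1234, ollama to 11434.
const baseUrl = 'http://localhost:1234/v1';

async function ask(prompt) {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'local-model',                         // many local servers ignore this field
      messages: [{ role: 'user', content: prompt }],
      temperature: 0.7,
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

ask('Explain in one sentence what a userscript is.').then(console.log);
```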

Regarding uncensored models (abliterated), if you delve into the details, they're the equivalent of a human who has undergone a lobotomy, been treated in psychiatry, and been released. At best, the model will be a little duller than the original.

As for 1-bit quants, they're a clever way to bring a heavily quantized model closer to q4 quality. They're better than ordinary q2-q3 quantizations, but still garbage. They might be suitable if you're determined to run a large model.

Where to download models? On Hugging Face you can find them under bartowski, or under QuantFactory for older models if you're into antiquity. But I'd check with the model publishers themselves first.

That's basically all the advice I have on LLMs, and it's only a small part. One more thing: English is the priority language for prompting. Not that the response will be faster or more optimal otherwise; it's that with other languages the tokenizer may not be very compatible. But that's not an axiom.

And here's a list of recommended models that I might suggest:

  • Claude: not local but an uncontested leader, especially for code. There's nothing better for the near future.
  • Qwen2.5: unlike the garbage 1.5 series, this is a satisfying series. Reddit's choice. But don't listen to the fanatics who promote it everywhere for a bowl of rice.
  • Mistral-Nemo, Mistral-Small: not bad models that don't mix Chinese and English words.
  • WizardLM-2: a decent all-around model; I recommend it as a starting point. It's simply not bad.
  • codegeex4-all-9b: a dumb model in general, but actually decent for code refactoring. I wouldn't recommend it as a main model, but it's useful when you need to "reorganize" code that isn't very organized.

And the same model comes in different variants. Always choose the Instruct or chat version, never the base one.

Here is some LLM software:

What else is possible right now? Music generation is very popular. Video generation is not yet very successful when it comes to people; animating static images is easier than creating video from scratch.
https://www.youtube.com/watch?v=Ddpx0JLOH6o

@KamelittaOida

KamelittaOida commented Dec 15, 2024

wow great you're still working on this. at some point you told me "there's always Iridium".
bro. I tried Iridium, it has like 10 options and iirc none I would use :O 😭
(this is not a request. (still need to update from link above)) I wonder if you came back bc yt fucked up something again aka for me 'Thumbnails count in row' like 3 weeks ago.

> The script is incredibly huge! Let's start in order.

who are you telling lol

great rundown on the ai stuff 👍

@raingart
Owner

@KamelittaOida

> who are you telling lol

I also found it funny when, a few months later, I opened the code editor and saw the line counts of the plugins at the bottom: 1200, OK... next plugins: 800, 900, 4000... why the hell is there so much?
Well, I think, now I'll write this bit of code a little more compactly. As a result, git diff reports +50 new lines in the file. Damn it!

It's interesting that, while I was listening to a podcast of Richard Feynman from 1985, I had a flashback to what I'm observing right now with transformer models.

@KamelittaOida

KamelittaOida commented Dec 15, 2024

was never the biggest fan of Feynman but I liked his "ice is slippery" interview. but whatever gets the gears going.
recently got this recommended by yt https://www.youtube.com/watch?v=TwKpj2ISQAc - the sham legacy of Richard Feynman - Angela Collier - Nov 2024

@raingart
Owner

I met another crazy person on Reddit who claimed that imatrix quants are better than K-quants. The funny thing is that just half a year ago, those who chose i-quants were called fanatics. But as always, you can argue on the Internet without reading the instructions and without even running experiments. After all, these cretins are above reading some kind of manual.

The answer to the question of why use quantization (lossy compression) at all:
the problem is that VRAM is the scarcest and most expensive resource for running models locally. Therefore, we have to try every possible way to fit the largest model plus its context window.
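
To put numbers on "VRAM is the scarcest resource": the weights are only part of the budget, because the KV cache for the context window has to fit too. A back-of-the-envelope check, assuming a generic 7B Llama-style model without grouped-query attention (all dimensions below are illustrative):

```javascript
// Very rough VRAM budget: quantized weights + fp16 KV cache for the chosen context.
const params = 7e9;        // 7B parameters
const bpw = 5.7;           // ~Q5_K_M bits per weight (approximate)
const layers = 32;         // assumed 7B-class depth
const hidden = 4096;       // assumed hidden size, no grouped-query attention
const ctx = 8192;          // desired context length in tokens
const kvBytes = 2;         // fp16 cache

const weightsGiB = (params * bpw) / 8 / 2 ** 30;
const kvCacheGiB = (2 * layers * hidden * kvBytes * ctx) / 2 ** 30; // K and V, per layer, per token

console.log(`weights ~${weightsGiB.toFixed(1)} GiB, KV cache @ ${ctx} tokens ~${kvCacheGiB.toFixed(1)} GiB`);
// Together that already overshoots an 8 GB card, which is exactly why people reach
// for heavier quantization, GQA models, or a shorter context.
```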

here is the official wiki llama.cpp
https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix

comparison of i-quants vs k-quants
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

and the information I found about quantization in the form of graphs:

[quantization comparison graphs]

and here is the graph I personally compiled based on many runs of my own synthetic questions:

[graph: losses across quantization levels]

My personal observations and conclusions. Let me mention right away that I don't hold a doctorate in LLMs, and I could be wrong.

IQ4_XS is no worse than Q4_K_M; the difference is barely noticeable. However, I personally recommend Q5_K_M. That's right on the line where the quality loss is still small enough that you won't notice it. Q6 is noticeably slower, although not significantly so.
This is particularly true for smaller models below 20B. The smaller the model, the more 'fragile' the line where a quantization error can distort the result. However, this can usually be worked around by regenerating the answer.
Models above 30B are much larger and can maintain contextual continuity even at the Q4 level, especially in RP or similar use, where people actually push the 'temp' value above 1 (reducing determinism), which in essence is already akin to quantization.
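
On the "temp above 1 reduces determinism" remark: temperature just rescales the logits before softmax, so values above 1 flatten the token distribution and sampling picks lower-ranked tokens more often. A quick demo with made-up logits:

```javascript
// Softmax with temperature: T < 1 sharpens the distribution, T > 1 flattens it.
function softmax(logits, temperature = 1) {
  const scaled = logits.map(x => x / temperature);
  const max = Math.max(...scaled);                 // subtract max for numerical stability
  const exps = scaled.map(x => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

const logits = [4.0, 2.5, 1.0];                    // toy scores for three candidate tokens
for (const T of [0.5, 1.0, 1.5]) {
  console.log(`T=${T}:`, softmax(logits, T).map(p => p.toFixed(2)).join(' '));
}
// At T=0.5 the top token dominates (~0.95); at T=1.5 the probabilities spread out
// (~0.67 / 0.24 / 0.09), so "worse" tokens get sampled more often - loosely similar
// in effect to the noise a heavy quantization introduces.
```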

Also, there's a very crude but incredibly precise statement from an anonymous commenter: 'Shitty question, shitty answer.' The less accurately a question or prompt is composed, the worse the final answer will be. The model has no crystal ball that reads your thoughts.

And if you're not sure about the answer (which should always be the case, since the whole principle of an LLM is to generate text that merely looks like an answer), it's imperative to check it with another model rather than by starting a new conversation with the current one.

Regarding image generation models: if the model doesn't produce an image in one or two steps ("one-shot"), then any form of quantization is acceptable. However, these models are already quite compressed, and the differences in size between, for example, gguf q5 and q8 are practically negligible. Additionally, image generation eats all the available VRAM as the output resolution grows, which means the models are highly resource-intensive for the resolution they produce.

I hope this "wisdom" will be useful to you. After all, understanding this didn't require me to climb mountains to see old sages; I just had to sift through the trash of internet comments.

@pointydev
Author

Just quickly jumping back to the original issue topic, is there any chance we could get a userscript build hosted on GitHub or GreasyFork (again)? I'm still having ratelimit issues with OpenUserJS. If you were to go down the GitHub hosted route, perhaps you could have a more "official" way to host the beta version so that more people could try it out (without having to dig for it through this issue lol). Additionally, the GreasyFork badge in the readme is broken, in case you hadn't noticed. Thanks again!

@raingart
Owner

raingart commented Jan 1, 2025

@pointydev hi, happy New Year to you!
I haven't forgotten about you, and I remember your request.

Currently, a dev version is available (click the Raw button):
https://gist.github.com/raingart/d734dd35f7090e9824df15f52c36f8a0
Please add the Raw button link as the update endpoint.

The situation is that OpenUserJS, which had worked fine for years, started serving only part of the script. When users updated, they ended up with a truncated script that didn't work (#58 #issuecomment-1443198053).
As a result, I had to remove the script to stop the spread of the mutilated version, which meant losing all the likes and statistics. However, once everything was restored for users, I republished the script, and for those who had it installed, everything updated properly.

By the way, this isn't the first time I've had to remove a script from GreasyFork due to similar issues (#6 #issuecomment-892596260). But back then, the reason was different. An author from India, whose script I had reported to the site's administration for injecting malicious code or ads for certain regions, started spamming me after that.

Therefore, I recommend finding scripts through this website: https://www.userscript.zone/

Btw, I recently corresponded with the author of "Enhancer for YouTube™" (https://chromewebstore.google.com/detail/enhancer-for-youtube/ponfpcnoihfmfllpaingbgckeeldkhle). There was a conflict between some of Nova's plugins and his extension, but it seems I've resolved that issue. The author has already translated many plugins into French and, according to analytics, the 2 users from France will be happy with that. He also mentioned that he would try to implement some parts from Nova. So, my friends, you'll have another potentially great alternative!

Guys, if you have any requests to implement or fix something, I can do it. The basic things should work; from what I saw, only a few things tied to the previous version of the YT template were broken.

By the way, I took a look at GreasyFork, and the level of the scripts there has increased significantly. You can tell that many are written with LLMs, but the code is really not bad. And if what my friends who tested o3 from OpenAI tell me is true, then that model can solve complex programming problems, approaching the level of an average programmer. That is, a few prompts will be enough to implement a lot of things.
So it's possible that many of us will soon find ourselves unemployed and end up working in coal mines. And each of you, with an LLM, could fix many bugs in the Nova code and add more features in a shorter time than I have over the entire year.

@pointydev
Author

Happy new year to you too!

Unfortunately I can't use the gist as the updateURL as the current dev version is 0.50.0b, which is technically behind the version on OpenUserJS 0.50.0.1. While I could overwrite the script manually, that would mean that all future updates would have to come from that gist for automatic updates to work. If you plan on updating that gist to the latest version of the code every update then that works for me!

If you do plan on uploading the dev version to that gist for each update, you may also want to instruct users to use https://gist.github.com/raingart/d734dd35f7090e9824df15f52c36f8a0/raw/nova-dev.user.js to install it instead of just clicking the Raw button, as the latter is pinned to the current gist revision (currently, the button leads to https://gist.github.com/raingart/d734dd35f7090e9824df15f52c36f8a0/raw/5b21b7d2d530ba4ee3b30dbd785d2aff455f2000/nova-dev.user.js).
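
For anyone following along, the practical difference is simply which URL ends up in the script's metadata block. A sketch of the relevant header keys with the un-pinned raw URL (the surrounding keys and version are illustrative, not copied from the actual dev build):

```javascript
// ==UserScript==
// @name         Nova (dev)
// @version      0.50.0b
// @match        https://www.youtube.com/*
// @updateURL    https://gist.github.com/raingart/d734dd35f7090e9824df15f52c36f8a0/raw/nova-dev.user.js
// @downloadURL  https://gist.github.com/raingart/d734dd35f7090e9824df15f52c36f8a0/raw/nova-dev.user.js
// ==/UserScript==
// The un-pinned /raw/<file> URL always serves the latest revision of the gist, while
// the Raw button's link embeds a specific revision hash and would never update.
```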

I have noticed you updated the link in the readme installation instructions to userscript.zone; however, it only links back to OpenUserJS, which doesn't solve my ratelimit issue. I also noticed that, due to the search query used in that link, it may confuse users, since your other script (which is now gone) is also listed there:
[screenshot]

@raingart
Owner

Guys, over the past month I tested 5 'reasoning' LLMs (chain of thought), and overall it's a joke. They practically don't outperform regular LLMs in any practical task. The only exception is mathematics; there the advantage is obvious. But in programming, for example, it's complete garbage: they just 'consume' extra tokens in their response to end up comparable to a fast response without reasoning.

I thought I'd keep my findings to myself, in case it was just my results, but then I saw this post here, and this image perfectly embodies my experience:

[image]

So the panic about the coal mine is canceled for now...

==========

I think I've already bored you enough with LLMs.
So instead, here are some alternative YouTube clients for smartphones. I think almost everyone uses ReVanced, but there are other quite interesting projects:

And those written in Flutter, i.e. potentially universal (compatible with iOS):

8 participants