Hiatus explanation (originally "GreasyFork userscript entry gone") #173
I think it's worth explaining. I thought the script would go missing without anyone noticing for at least a few weeks, or maybe never. I don't plan to update anything for a couple of months, so you don't have to worry about missing something. Maybe sometime in the future, when I've cooled down and I'm ready to serve ungrateful upstarts again. By the way, it's not difficult to assemble the script yourself. I made a

Yes, some critics began to complain about the script and about me personally. It drove me crazy, because GreasyFork does not have an access-restriction function. I used to run several parsers and bots for data processing and administration, and they had a monthly service fee for clients. And you know what, I've never been rude to anyone; all the requests I received were competent. In my free time I wrote software for smartphones and browser extensions, and there will always be dissatisfied people. Amazing impudence. It's true what they say: "people don't value the things they get for free." They believe they have rights and that I should listen to their nonsense, as if I have nothing else to do.

Or the latest news: I managed to write a webhook to intercept and modify data from YouTube. Combined with the YouTube data-search module, which has been present for a long time, it makes it possible to replace YouTube data on the fly. But I'm not interested anymore. Why waste my free time on this? I don't need this functionality. Does anyone need it? I don't know. I've even stopped periodically checking for new scripts with interesting functions. I've actually become freer. I think time will show how successful Nova was; if it breaks without me, then it's time for the emu to go into the trash bin of history.

And I haven't even mentioned the endless spam yet, with the constant offers of spyware or adware modules to integrate into my products. Or ransom demands.
By the way, do you know what the biggest complaint was when I posted to the web store? "The script does not work in my mobile browser in the mobile version of YouTube." Why the hell should it work? Did I mention that anywhere? This knocked the ground out from under my feet so much that I didn't even know how to respond. They think that if their silly browsers (Cromite, Bromite, Via, Kiwi and other junk) have a "userscript support" tab, it should work. Thank you @pointydev and many other caring users. I tried for all of you. For me personally, most of the functionality goes unused. My script is not the center of YouTube solutions; there are other, far more promising projects, and I believe in them. Perhaps their hands will be stronger than mine. Thanks all. |
Shame to see it disappear, and you've got to hate it when some entitled people ruin it for the rest of us. I guess we'll have to find an alternative that comes close, which I doubt exists, but you never know. You deserve a long "break," and if you don't come back, then adios and take care. |
omfg -.- just now tried to update the script in another profile then visited greasyfork. well so it goes. all the best. thanks for all the fish |
Another one bites the dust. I started over 10 years ago with Youtube Center, then Youtube Plus which turned into Iridium and now I'm on Nova. This is such incredibly sad news. I can't live without:
And some just wildly useful features that make YouTube more tolerable:
Of course, this is a small selection of all the wonderful things this script can do. PS: As a small bit of 'criticism', I think perhaps you've let Nova get too big. This is what happened to YouTube Center: Yeppha was constantly taking people's requests for tiny, niche features and implementing them, then getting overwhelmed, and I feel like that's been the case for Nova as well. In any case, thank you very much for all the work you've done on Nova!! I hope you come back to it at some point. |
This script is amazing! It's disappointing that you won't be maintaining it for the near future, but I want to express my gratitude. Your work has really made my YouTube experience exceptional, I hope you return someday. Take care! |
Hey guys! I don't know when I'll release a new Nova version, but here's an intermediate one. There is no particular point in installing it, though, because it does not expand the functionality but deepens the internal mechanisms; that is, optimization and load reduction. In other words, there is practically nothing new there, and some things are "damaged" because they are simply unfinished.

In recent months I have put all my energy into local LLMs, and I have a lot to say, from initial surprise and delight to disappointment and humiliation. LLMs have their own GitHub analogue (a model repository), https://huggingface.co/, and a package manager (loosely analogous to git), https://ollama.com/
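Since ollama came up as the "package manager" of the local-LLM world: here's a minimal sketch of talking to its local HTTP API from Python with only the stdlib. The model name `llama3` is an assumption; substitute whatever you've actually pulled with `ollama pull`.

```python
# Minimal sketch of querying a local Ollama server over its HTTP API.
# The model name below is a placeholder; any pulled model works.
import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"  # Ollama's default port

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming POST to Ollama's /api/generate endpoint."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        f"{OLLAMA_HOST}/api/generate",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def generate(model: str, prompt: str) -> str:
    """Send the request and return the model's text (needs `ollama serve` running)."""
    with urllib.request.urlopen(build_generate_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running server and a pulled model):
# print(generate("llama3", "Summarize what a userscript is in one sentence."))
```

This is just a sketch of the plumbing, not anything Nova itself does; the point is that a local model is one plain HTTP call away once ollama is installed.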
The listed GUIs are written in Electron; the only difference is in the UX. Of those, the one I recommend for code is:
About Nova. In truth, that conflict was just the last straw. I began to feel that I had reached the limit of the script's current capabilities. The current functionality already exceeds what I actually use. Of course, there is always room for development (for example, using shadow DOM, or intercepting and changing the data the YT framework sends), but it's disproportionate to the resources required. The next round of serious development would require global changes to the script's architecture and operating principles, and I don't want to do that right now. And I became really interested in how other users would solve these problems. P.S. Thank you all for your support. If I return, it will be thanks to all of you. |
Thanks for the update on how things are going @raingart, glad to see you've been keeping yourself busy. I'll check out the dev build if I end up running into the TrustedHTML error (haven't so far). Appreciate you. <3 |
yoooo buddy. tysm. hope ur doing well. so far the few things I need from Nova still work 😅
you need to step away a little bit, figuratively, and not take everything so seriously.
I'm a pretty altruistic person, but if I had an opensource project and maybe the donations wouldn't do so well in comparison to the bitching, I really don't know how long I would keep at it. |
Thanks for the update @raingart. It's great to hear that you're exploring new opportunities and we're looking forward to seeing what the future holds for you and Nova. Take care and thanks for keeping in touch! 👊 |
I really love Nova. It's just so great and nothing else works nearly as well. Very sad to hear that all the typical, stupid internet shit finally got to you. Certainly understandable, but still such a tragic loss. Always remember that what you created is a remarkable achievement. No low-life spammer or 13 year-old griefer can ever take that away from you. I truly hope someday you'll return. |
Hi guys! Check the beta ver. I'm back from "the Warp"! I wanted to say a lot, but I'm sitting here not knowing what to write. It was very difficult to return to Nova; the script is incredibly huge! Let's start in order. (There is a lot of text about LLMs below; I realized I got carried away there, so I moved it to the end of the message.)

I only managed to put together a working version of Nova recently. Before that, many parts were "disassembled." Initially, I wanted to review all the plugins and rewrite them, but I ran out of strength, so I only partially resumed work on them. Unfortunately, LLMs were useless for this. Many of the external mechanisms were optimized, which really took as much time as writing them from scratch. Many things were written at different times, and there was no "unity" among them. I initially kept a changelog, but at some point I stopped, because the fixes affected almost all plugins.

As users, you may not notice any changes except that the settings-page URL has changed. This is because the data-synchronization method has changed and is not compatible with the old one. It reduces the situations where plugins seemed not to load and the page came up empty, if you've had such issues. A lot of work has also gone into optimization. Not that the results are immediately noticeable, but theoretically YouTube should now run at the same speed with Nova as without it.

I wanted to prepare a 'gift' for you all by Christmas, but this is all I managed to do. Nothing new has been added except for one plugin that, in essence, is part of another. I was just about to be overwhelmed by this mess when you sent your request. Here's the list of new features, excluding fixes:
But don't think that I just gave up on Nova; working on it was so difficult that I created several libraries and extensions along the way. And I swear I didn't spend more than two days on each one.
There is another extension that works, but I'm too lazy to add dragging elements into position and moving them between tabs. By the way, I tried using whisper.cpp to generate subtitles for videos. It turned out that the quality of the subtitles is no better than what YouTube itself generates, only the generation is slower.

Basically, the beta ver is kind of working, lol. Write to me if you want anything fixed or new features added; I'll try my best. I haven't looked at what's new for YouTube in over half a year. Maybe there are already better scripts or extensions out there. Or parts that were "stolen" from me; for example, I just found out who took the music-identification module from me, without crediting the author, of course. So there's no need to lie, because besides me, no one else uses that idiotic method.

Ask: When will there be a normal release? Answer: It's going to be "sometime."

LLM. What am I talking about and how does it work: https://www.youtube.com/watch?v=UZDiGooFs54

For the first two months I just left it alone; I had no motivation. Then, at the end of summer, I finally took matters into my own hands and started making something happen. Ultimately, I disassembled the script into parts and began experimenting with local, less sophisticated Large Language Models (LLMs) for refactoring code with the directive 'refactor'. The result initially seemed very good to me; the code became understandable and more organized. However, this was a mistake: there were both logical and practical errors. I sent in entire plugins in the hope that the models would understand and grasp the logic of their separate logical parts. Oh, it took me probably another two months to fix what they missed and to reconsider my methods. Still, this was excellent testing for LLMs. I tested all LLMs up to 25 billion parameters, and many of them simply discarded parts of the code. This affected even larger LLMs like command-r and gemini. GPT-4 also didn't impress me.
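As an aside on the whisper.cpp subtitle experiment above, here's a minimal sketch of how such a run can be scripted, assuming a stock whisper.cpp build (the `main` binary with its `-m`/`-f`/`-osrt` flags); the model and file paths are placeholders.

```python
# Sketch of driving a whisper.cpp subtitle run from Python.
# Assumes a stock whisper.cpp build: `main` binary, -m (model),
# -f (input wav), -osrt (write an .srt next to the input).
import subprocess

def whisper_srt_cmd(model_path: str, wav_path: str) -> list[str]:
    """Command line that writes <wav_path>.srt.
    whisper.cpp expects 16 kHz mono WAV input; convert first with e.g.:
    ffmpeg -i video.mp4 -ar 16000 -ac 1 audio.wav"""
    return ["./main", "-m", model_path, "-f", wav_path, "-osrt"]

def transcribe(model_path: str, wav_path: str) -> None:
    """Run whisper.cpp and raise if it fails (requires the built binary)."""
    subprocess.run(whisper_srt_cmd(model_path, wav_path), check=True)

# transcribe("models/ggml-base.en.bin", "audio.wav")
```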
Out of about 30 LLMs I tested in practice, only around 5 turned out to be relatively decent, either not destroying parts of the code or doing so minimally. At least they're not complete garbage. Moreover, I tested only code, RP, and RAG; RP is another story. If you see a model advertised as strong in prose and writing, that's usually just hype, and it's garbage good only for trivial texts.

Also, I note that fine-tuned models don't surpass the stock models; in fact, most of them are worse. It's like alchemy, where you randomly mix things in the hope of getting gold but end up with chimera-like abominations at best. I don't have the resources, knowledge, or even datasets. It's amusing when people "educate" models on 100-10k of their own examples/books. That's such a tiny fraction of the data that it's within the margin of error.

Progress has been made in image generation; the results are much more tangible. What's more, I've even quantized several Stable Diffusion models myself. LoRA is indeed a simple and effective fine-tune. And FLUX is a major breakthrough that approaches closed models like MidJourney and DALL-E.

MoE (Mixture of Experts) for personal or local use is garbage; it's a purely service-oriented architecture suitable for hosting, capable of quickly and efficiently solving various tasks for different users. Overall, MoE models are worse in every way except speed. They're easily distinguishable: either the name explicitly states it, or it's indicated by an abbreviation like 8x20B, where '8' is the number of 'experts'.

As for model quantization (compression), GGUF is the uncontested winner by a significant lead. It's not the best, but it's the most accessible. If you're downloading, choose … As for running models, use the simplest, like … As for which models to use, definitely go for the 7B models. They're the minimum that's practically useful. Anything below that I consider garbage; it's better not to waste time on them and simply use models via API, which are almost always free.
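To make the quantization trade-off concrete, here's a rough back-of-envelope size estimate. The bits-per-weight figures are approximate values of the kind reported by llama.cpp's quantize tool; treat them as ballpark numbers, not exact file sizes.

```python
# Rough GGUF weight-size estimate from parameter count and the
# bits-per-weight of the chosen quantization. The bpw values below are
# approximate (ballpark figures from llama.cpp), not exact.
APPROX_BPW = {
    "Q8_0": 8.5,
    "Q6_K": 6.56,
    "Q5_K_M": 5.69,
    "Q4_K_M": 4.85,
    "IQ4_XS": 4.25,
}

def est_size_gb(n_params: float, quant: str) -> float:
    """Estimated weight size in decimal GB (excludes KV cache and runtime overhead)."""
    return n_params * APPROX_BPW[quant] / 8 / 1e9

# A 7B model at Q4_K_M needs roughly 4.2 GB just for the weights,
# which is why 7B is the sweet spot for ordinary consumer GPUs.
```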
Regarding uncensored models (…). As for 1-bit quants, they're a clever way to elevate a heavily quantized model toward q4. They're better than ordinary q2-q3 quantizations, but still garbage; they might be suitable if you're determined to run a large model.

Where to download models? On the Hugging Face website you can find them under bartowski, or QuantFactory for older models if you're into antiquity. But I'd check with the publishers themselves.

That's basically all my advice on LLMs, and it's only a small part. One more thing: English is the priority language. No, the response won't be faster or more optimal in other languages; the tokenizer may not be very compatible with them, but that's not an axiom. And here's a list of recommended models that I might suggest:
And there are different variants of the same model. Always choose either the … Here are some soft LLMs:
What's possible right now? Generating music is very popular right now. Generating video frames is not yet very successful for people. Animating static images is easier than creating from scratch. |
wow great you're still working on this. at some point you told me "there's always Iridium".
who are you telling lol great rundown on the ai stuff 👍 |
I also found it funny when, a few months later, I opened the code editor and saw the line counts shown under the plugins: 1200, OK... next plugins: 800, 900, 4000. Why the hell is there so much? It's interesting that while listening to a 1985 podcast by Richard Feynman, I had a flashback to what I'm observing right now with transformer models |
was never the biggest fan of Feynman but I liked his "ice is slippery" interview. but whatever gets the gears going. |
I met another crazy person on Reddit who claimed that imatrix quants are better than K-quants. The funny thing is that just half a year ago, those who chose i-quants were called fanatics. But as always, you can chat on the internet without reading the instructions and without even running experiments. After all, these cretins are above reading any kind of manual.

The answer to the question of why use quantization (lossy compression) at all: here is the official llama.cpp wiki comparison of i-quants vs k-quants, the information I found about quantization in the form of graphs, and the graph I personally compiled from many runs of my own synthetic questions.

My personal observations and conclusions (let me say right away that I don't hold a doctorate in LLMs, and I could be wrong): IQ4_XS is not worse than Q4_K_M, and the difference is barely noticeable. However, I personally recommend Q5_K_M; it sits on the fine line where the quality loss is still too small to notice. Q6 is noticeably slower, although not dramatically so.

Also, there's a very crude but incredibly precise statement from an anonymous commenter: 'Shitty question, shitty answer.' The less accurately a question or prompt is composed, the worse the final answer will be. The model has no crystal ball that reads your thoughts. And if you're not sure about an answer (which is obviously the case, since the principle of an LLM is to generate text that merely resembles an answer), it's imperative to check it in another model rather than starting a new conversation with the current one.

Regarding image-generation models: if the model doesn't produce an image in one or two steps ("one-shot"), then any form of quantization is acceptable. However, those models are already quite compressed, and the difference in size between, for example, gguf q5 and q8 is practically negligible.
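For context on how quant comparisons like the llama.cpp i-quant vs k-quant graphs are usually scored: the metric is perplexity, which is just the exponentiated mean negative log-likelihood per token. A minimal sketch of the formula:

```python
# Perplexity = exp(mean negative log-likelihood per token); lower is better.
# Quant comparisons report how far a quantized model's PPL drifts from the
# fp16 baseline, which is the "quality loss" the graphs plot.
import math

def perplexity(token_nlls: list[float]) -> float:
    """Compute perplexity from per-token negative log-likelihoods (nats)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# A model that assigns every token probability 1/2 has perplexity 2;
# a model that is always certain (probability 1) has perplexity 1.
```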
Additionally, during image generation the model uses up all available VRAM for the output resolution; in other words, these models are highly resource-intensive for the resolution they produce. I hope this "wisdom" will be useful to you. After all, gaining this understanding didn't require me to climb mountains to consult old sages; I just had to sift through the trash of internet comments. |
Just quickly jumping back to the original issue topic, is there any chance we could get a userscript build hosted on GitHub or GreasyFork (again)? I'm still having ratelimit issues with OpenUserJS. If you were to go down the GitHub hosted route, perhaps you could have a more "official" way to host the beta version so that more people could try it out (without having to dig for it through this issue lol). Additionally, the GreasyFork badge in the readme is broken, in case you hadn't noticed. Thanks again! |
@pointydev hi, happy New Year to you! Currently, there is a dev build available (click on the …). The situation is that OpenUserJS, which worked fine years ago, started serving only part of the script; when users updated, they ended up with a truncated copy that didn't work (#58 #issuecomment-1443198053). By the way, this isn't the first time I've had to remove a script from GreasyFork over similar issues (#6 #issuecomment-892596260), but back then the reason was different: an author from India, whose script I had reported to the site's administration for injecting malicious code or ads for certain regions, started spamming me after that. Therefore, I recommend finding scripts through this website: https://www.userscript.zone/

Btw, I recently corresponded with the author of "Enhancer for YouTube™" (https://chromewebstore.google.com/detail/enhancer-for-youtube/ponfpcnoihfmfllpaingbgckeeldkhle). There was a conflict between some of Nova's plugins and his extension, but it seems I've resolved that issue. The author has already translated many plugins into French and, according to analytics, the 2 users from France will be happy with that. He also mentioned that he would try to implement some parts of Nova. So, my friends, you'll have another potentially great alternative!

Guys, if you have any wishes to implement or fix something, I can do it. The basic things should work; from what I saw, only some things tied to the previous version of the YT template were broken. By the way, I had a look at GreasyFork, and the level of the scripts there has increased significantly. You can see that many were written with LLMs, but the code is really not bad. And if what my friends tell me about their tests of o3 from OpenAI is true, then that model can solve complex programming problems, approaching the level of an average programmer. That is, a few requests will be enough to implement a lot of things. |
Guys, over the past month I tested 5 'reasoning' LLMs (chain-of-thought), and overall it's a prank. They practically don't outperform regular LLMs in any practical task. The only exception is mathematics; in such tasks the advantage is obvious. But in programming, for example, it's complete garbage. They just 'consume' tokens in their responses while being comparable to a fast response without reasoning. I thought I'd keep my findings to myself, in case it was just my results, but then I saw this post here and this image that perfectly embodies my experience. So the panic about the coal mine is canceled for now...

==========

I think I'm already fed up with you, LLM. …to those written in Flutter, that is, potentially universal (compatible with iOS):
Plain and simple, the userscript is no longer available on GreasyFork (returning 404). Personally, I have ratelimit issues with OpenUserJS due to my IP, could we get a fully compiled version on GitHub (perhaps in a separate repository or using GitHub releases)?
Thanks,
pointy
Thank you for all your hard work @raingart, I wish you well with whatever you decide to do in the future.