
Commit 24db80f
update Reddit feeds
actions-user committed Feb 9, 2025
1 parent 4534528 commit 24db80f
Showing 2 changed files with 78 additions and 78 deletions.
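The committer is actions-user, i.e. a scheduled CI job regenerates these files, and the feed's <generator> element in the diff below names python-feedgen. The feed-level <updated> churn at the top of the diff is consistent with that library stamping the generation time on every run. As a rough sketch of what the refresh step could look like (the script shape, field mapping, and User-Agent are assumptions, not code from this repository):

# Hypothetical refresh step (assumed, not taken from this repository):
# mirror a subreddit's Atom feed into feeds/<name>.xml with python-feedgen.
import feedparser
from feedgen.feed import FeedGenerator

def mirror_subreddit(name: str, out_path: str) -> None:
    # Reddit exposes Atom at /r/<name>/.rss; a custom agent avoids the
    # throttling applied to feedparser's default User-Agent (value illustrative).
    src = feedparser.parse(f"https://old.reddit.com/r/{name}/.rss",
                           agent="reddit-feed-mirror/0.1")

    fg = FeedGenerator()  # feedgen emits the <generator> element itself
    fg.id(f"/r/{name}/.rss")
    fg.title(src.feed.get("title", name))
    fg.link(href=f"https://old.reddit.com/r/{name}/", rel="alternate")
    fg.icon("https://www.redditstatic.com/icon.png/")
    fg.subtitle(src.feed.get("subtitle", ""))
    # Not calling fg.updated() lets feedgen stamp "now" at serialization,
    # which would explain the <updated> bump on every commit.

    for e in src.entries:
        fe = fg.add_entry()
        fe.id(e.id)                                  # e.g. "t3_1iln1lj"
        fe.title(e.title)
        fe.link(href=e.link)
        author = e.get("author_detail", {})
        fe.author(name=author.get("name", ""), uri=author.get("href", ""))
        if "content" in e:
            fe.content(e.content[0].value, type="html")
        for tag in e.get("tags", []):
            fe.category(term=tag.get("term", ""),
                        label=tag.get("label") or tag.get("term", ""))
        fe.updated(e.updated)
        fe.published(e.get("published", e.updated))

    fg.atom_file(out_path, pretty=True)  # writes e.g. feeds/LocalLLaMA.xml

mirror_subreddit("LocalLLaMA", "feeds/LocalLLaMA.xml")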
102 changes: 51 additions & 51 deletions feeds/LocalLLaMA.xml
@@ -2,24 +2,11 @@
 <feed xmlns="http://www.w3.org/2005/Atom">
 <id>/r/LocalLLaMA/.rss</id>
 <title>LocalLlama</title>
-<updated>2025-02-09T20:05:33+00:00</updated>
+<updated>2025-02-09T20:22:46+00:00</updated>
 <link href="https://old.reddit.com/r/LocalLLaMA/" rel="alternate"/>
 <generator uri="https://lkiesow.github.io/python-feedgen" version="1.0.0">python-feedgen</generator>
 <icon>https://www.redditstatic.com/icon.png/</icon>
 <subtitle>Subreddit to discuss about Llama, the large language model created by Meta AI.</subtitle>
-<entry>
-<id>t3_1iljp5v</id>
-<title>LLM Stack</title>
-<updated>2025-02-09T17:16:40+00:00</updated>
-<author>
-<name>/u/BalaelGios</name>
-<uri>https://old.reddit.com/user/BalaelGios</uri>
-</author>
-<content type="html">&lt;!-- SC_OFF --&gt;&lt;div class="md"&gt;&lt;p&gt;Is any census on the generally “all around” most effective 70b model?&lt;/p&gt; &lt;p&gt;Or even models like mistral small that claim to be as good as or better than llama 3.3 70b for example.&lt;/p&gt; &lt;p&gt;Seems in benchmarks every model claims to beat every other haha. &lt;/p&gt; &lt;p&gt;I’m leaning toward keeping my stack at &lt;/p&gt; &lt;p&gt;Qwen 2.5 Coder 1.5b - for autocomplete on code, I’ve tested a few different ones for this and can’t say anything is significantly better, and this model is pretty much instant suggestions for me. Sometimes it’s a total miss other times it’s good.&lt;/p&gt; &lt;p&gt;Qwen 2.5 Coder 32b - I’ve stuck with this for code assistant, debugging, unit tests etc. Pretty happy with it not had any reason to try alternatives it’s done well with everything I’ve asked of it this far.&lt;/p&gt; &lt;p&gt;A general all around model though I’m a little more unsure haha. I’m leaning toward just using Llama 3.3, but I’m also kinda intrigued by mistral small 24b especially since it claims to beat llama 3.3 but of course it’s much faster. Realistically can a 24b model compete with a 70b model? &lt;/p&gt; &lt;p&gt;I can comfortably run a 70b at 4bit, even 6bit seems to work pretty reasonably though 4bit is a better sweet spot. &lt;/p&gt; &lt;p&gt;The R1 models I’ve tried LlAma 70b distill for daily driver they don’t seem to give better responses just longer to answer, I’m not asking complicated questions just really knowledge questions. &lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href="https://old.reddit.com/user/BalaelGios"&gt; /u/BalaelGios &lt;/a&gt; &lt;br /&gt; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1iljp5v/llm_stack/"&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1iljp5v/llm_stack/"&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content>
-<link href="https://old.reddit.com/r/LocalLLaMA/comments/1iljp5v/llm_stack/"/>
-<category term="LocalLLaMA" label="r/LocalLLaMA"/>
-<published>2025-02-09T17:16:40+00:00</published>
-</entry>
 <entry>
 <id>t3_1iln1lj</id>
 <title>Good local LLM for text translation.</title>
@@ -47,17 +34,17 @@
 <published>2025-02-09T04:53:07+00:00</published>
 </entry>
 <entry>
-<id>t3_1illp96</id>
-<title>voice-to-LLM coding assistant for any GUI text editor</title>
-<updated>2025-02-09T18:40:14+00:00</updated>
+<id>t3_1ilnkya</id>
+<title>Whats the biggest size LLM at Q4 KM or higher fittable on 16GB VRAM?</title>
+<updated>2025-02-09T19:57:38+00:00</updated>
 <author>
-<name>/u/clickitongue</name>
-<uri>https://old.reddit.com/user/clickitongue</uri>
+<name>/u/meta_voyager7</name>
+<uri>https://old.reddit.com/user/meta_voyager7</uri>
 </author>
-<content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1illp96/voicetollm_coding_assistant_for_any_gui_text/"&gt; &lt;img alt="voice-to-LLM coding assistant for any GUI text editor" src="https://external-preview.redd.it/t4ScQF1hipu9Ja_H17ev9BazNgQd95dDqbpVkq1rzt8.jpg?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=421c316ff82503cb6f332f752533ebb32704eccd" title="voice-to-LLM coding assistant for any GUI text editor" /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href="https://old.reddit.com/user/clickitongue"&gt; /u/clickitongue &lt;/a&gt; &lt;br /&gt; &lt;span&gt;&lt;a href="https://github.com/farfetchd/clickitongue?tab=readme-ov-file#voice-to-llm-code-focused-typing"&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1illp96/voicetollm_coding_assistant_for_any_gui_text/"&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content>
-<link href="https://old.reddit.com/r/LocalLLaMA/comments/1illp96/voicetollm_coding_assistant_for_any_gui_text/"/>
+<content type="html">&lt;!-- SC_OFF --&gt;&lt;div class="md"&gt;&lt;p&gt;GPU is Nvidia 5080. Main use case in order of priority * Coding assistance using Roo code and continue * Creative Writing in English.&lt;/p&gt; &lt;p&gt;Should have &amp;gt; 10 tokens/ second inference speed. 1. What's the biggest size LLM at Q4 KM fittable on 16GB VRAM? 2. Which LLM at this size and quant would you suggest?&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href="https://old.reddit.com/user/meta_voyager7"&gt; /u/meta_voyager7 &lt;/a&gt; &lt;br /&gt; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1ilnkya/whats_the_biggest_size_llm_at_q4_km_or_higher/"&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1ilnkya/whats_the_biggest_size_llm_at_q4_km_or_higher/"&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content>
+<link href="https://old.reddit.com/r/LocalLLaMA/comments/1ilnkya/whats_the_biggest_size_llm_at_q4_km_or_higher/"/>
 <category term="LocalLLaMA" label="r/LocalLLaMA"/>
-<published>2025-02-09T18:40:14+00:00</published>
+<published>2025-02-09T19:57:38+00:00</published>
 </entry>
 <entry>
 <id>t3_1il9k56</id>
@@ -73,17 +60,17 @@
 <published>2025-02-09T07:36:06+00:00</published>
 </entry>
 <entry>
-<id>t3_1ila7j9</id>
-<title>Which open source image generation model is the best? Flux, Stable diffusion, Janus-pro or something else? What do you suggest guys?</title>
-<updated>2025-02-09T08:23:15+00:00</updated>
+<id>t3_1illp96</id>
+<title>voice-to-LLM coding assistant for any GUI text editor</title>
+<updated>2025-02-09T18:40:14+00:00</updated>
 <author>
-<name>/u/Outrageous-Win-3244</name>
-<uri>https://old.reddit.com/user/Outrageous-Win-3244</uri>
+<name>/u/clickitongue</name>
+<uri>https://old.reddit.com/user/clickitongue</uri>
 </author>
-<content type="html">&lt;!-- SC_OFF --&gt;&lt;div class="md"&gt;&lt;p&gt;Can these models generate 4K resolution images?&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href="https://old.reddit.com/user/Outrageous-Win-3244"&gt; /u/Outrageous-Win-3244 &lt;/a&gt; &lt;br /&gt; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1ila7j9/which_open_source_image_generation_model_is_the/"&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1ila7j9/which_open_source_image_generation_model_is_the/"&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content>
-<link href="https://old.reddit.com/r/LocalLLaMA/comments/1ila7j9/which_open_source_image_generation_model_is_the/"/>
+<content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1illp96/voicetollm_coding_assistant_for_any_gui_text/"&gt; &lt;img alt="voice-to-LLM coding assistant for any GUI text editor" src="https://external-preview.redd.it/t4ScQF1hipu9Ja_H17ev9BazNgQd95dDqbpVkq1rzt8.jpg?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=421c316ff82503cb6f332f752533ebb32704eccd" title="voice-to-LLM coding assistant for any GUI text editor" /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href="https://old.reddit.com/user/clickitongue"&gt; /u/clickitongue &lt;/a&gt; &lt;br /&gt; &lt;span&gt;&lt;a href="https://github.com/farfetchd/clickitongue?tab=readme-ov-file#voice-to-llm-code-focused-typing"&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1illp96/voicetollm_coding_assistant_for_any_gui_text/"&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content>
+<link href="https://old.reddit.com/r/LocalLLaMA/comments/1illp96/voicetollm_coding_assistant_for_any_gui_text/"/>
 <category term="LocalLLaMA" label="r/LocalLLaMA"/>
-<published>2025-02-09T08:23:15+00:00</published>
+<published>2025-02-09T18:40:14+00:00</published>
 </entry>
 <entry>
 <id>t3_1ilg5rw</id>
@@ -98,6 +85,19 @@
 <category term="LocalLLaMA" label="r/LocalLLaMA"/>
 <published>2025-02-09T14:42:43+00:00</published>
 </entry>
+<entry>
+<id>t3_1ila7j9</id>
+<title>Which open source image generation model is the best? Flux, Stable diffusion, Janus-pro or something else? What do you suggest guys?</title>
+<updated>2025-02-09T08:23:15+00:00</updated>
+<author>
+<name>/u/Outrageous-Win-3244</name>
+<uri>https://old.reddit.com/user/Outrageous-Win-3244</uri>
+</author>
+<content type="html">&lt;!-- SC_OFF --&gt;&lt;div class="md"&gt;&lt;p&gt;Can these models generate 4K resolution images?&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href="https://old.reddit.com/user/Outrageous-Win-3244"&gt; /u/Outrageous-Win-3244 &lt;/a&gt; &lt;br /&gt; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1ila7j9/which_open_source_image_generation_model_is_the/"&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1ila7j9/which_open_source_image_generation_model_is_the/"&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content>
+<link href="https://old.reddit.com/r/LocalLLaMA/comments/1ila7j9/which_open_source_image_generation_model_is_the/"/>
+<category term="LocalLLaMA" label="r/LocalLLaMA"/>
+<published>2025-02-09T08:23:15+00:00</published>
+</entry>
 <entry>
 <id>t3_1il188r</id>
 <title>How Mistral, ChatGPT and DeepSeek handle sensitive topics</title>
@@ -229,17 +229,17 @@
 <published>2025-02-09T01:07:28+00:00</published>
 </entry>
 <entry>
-<id>t3_1ill18f</id>
-<title>Great Models Think Alike and this Undermines AI Oversight</title>
-<updated>2025-02-09T18:12:42+00:00</updated>
+<id>t3_1ikvo8a</id>
+<title>Your next home lab might have 48GB Chinese card😅</title>
+<updated>2025-02-08T19:39:39+00:00</updated>
 <author>
-<name>/u/juanviera23</name>
-<uri>https://old.reddit.com/user/juanviera23</uri>
+<name>/u/Redinaj</name>
+<uri>https://old.reddit.com/user/Redinaj</uri>
 </author>
-<content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1ill18f/great_models_think_alike_and_this_undermines_ai/"&gt; &lt;img alt="Great Models Think Alike and this Undermines AI Oversight" src="https://external-preview.redd.it/BvPrweB4u2rzXxGIWqF_D8vwdPandRjeXx7kZVQLVZc.jpg?width=216&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=3074d0e0dcf27a6d3d02265c9d1155726b988eea" title="Great Models Think Alike and this Undermines AI Oversight" /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href="https://old.reddit.com/user/juanviera23"&gt; /u/juanviera23 &lt;/a&gt; &lt;br /&gt; &lt;span&gt;&lt;a href="https://paperswithcode.com/paper/great-models-think-alike-and-this-undermines"&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1ill18f/great_models_think_alike_and_this_undermines_ai/"&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content>
-<link href="https://old.reddit.com/r/LocalLLaMA/comments/1ill18f/great_models_think_alike_and_this_undermines_ai/"/>
+<content type="html">&lt;!-- SC_OFF --&gt;&lt;div class="md"&gt;&lt;p&gt;&lt;a href="https://wccftech.com/chinese-gpu-manufacturers-push-out-support-for-running-deepseek-ai-models-on-local-systems/"&gt;https://wccftech.com/chinese-gpu-manufacturers-push-out-support-for-running-deepseek-ai-models-on-local-systems/&lt;/a&gt;&lt;/p&gt; &lt;p&gt;Things are accelerating. China might give us all the VRAM we want. 😅😅👍🏼 Hope they don't make it illegal to import. For security sake, of course &lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href="https://old.reddit.com/user/Redinaj"&gt; /u/Redinaj &lt;/a&gt; &lt;br /&gt; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1ikvo8a/your_next_home_lab_might_have_48gb_chinese_card/"&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1ikvo8a/your_next_home_lab_might_have_48gb_chinese_card/"&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content>
+<link href="https://old.reddit.com/r/LocalLLaMA/comments/1ikvo8a/your_next_home_lab_might_have_48gb_chinese_card/"/>
 <category term="LocalLLaMA" label="r/LocalLLaMA"/>
-<published>2025-02-09T18:12:42+00:00</published>
+<published>2025-02-08T19:39:39+00:00</published>
 </entry>
 <entry>
 <id>t3_1ilh46m</id>
@@ -254,19 +254,6 @@
 <category term="LocalLLaMA" label="r/LocalLLaMA"/>
 <published>2025-02-09T15:26:28+00:00</published>
 </entry>
-<entry>
-<id>t3_1ikvo8a</id>
-<title>Your next home lab might have 48GB Chinese card😅</title>
-<updated>2025-02-08T19:39:39+00:00</updated>
-<author>
-<name>/u/Redinaj</name>
-<uri>https://old.reddit.com/user/Redinaj</uri>
-</author>
-<content type="html">&lt;!-- SC_OFF --&gt;&lt;div class="md"&gt;&lt;p&gt;&lt;a href="https://wccftech.com/chinese-gpu-manufacturers-push-out-support-for-running-deepseek-ai-models-on-local-systems/"&gt;https://wccftech.com/chinese-gpu-manufacturers-push-out-support-for-running-deepseek-ai-models-on-local-systems/&lt;/a&gt;&lt;/p&gt; &lt;p&gt;Things are accelerating. China might give us all the VRAM we want. 😅😅👍🏼 Hope they don't make it illegal to import. For security sake, of course &lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href="https://old.reddit.com/user/Redinaj"&gt; /u/Redinaj &lt;/a&gt; &lt;br /&gt; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1ikvo8a/your_next_home_lab_might_have_48gb_chinese_card/"&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1ikvo8a/your_next_home_lab_might_have_48gb_chinese_card/"&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content>
-<link href="https://old.reddit.com/r/LocalLLaMA/comments/1ikvo8a/your_next_home_lab_might_have_48gb_chinese_card/"/>
-<category term="LocalLLaMA" label="r/LocalLLaMA"/>
-<published>2025-02-08T19:39:39+00:00</published>
-</entry>
 <entry>
 <id>t3_1il9h73</id>
 <title>R1 (1.73bit) on 96GB of VRAM and 128GB DDR4</title>
@@ -280,6 +267,19 @@
 <category term="LocalLLaMA" label="r/LocalLLaMA"/>
 <published>2025-02-09T07:30:27+00:00</published>
 </entry>
+<entry>
+<id>t3_1ill18f</id>
+<title>Great Models Think Alike and this Undermines AI Oversight</title>
+<updated>2025-02-09T18:12:42+00:00</updated>
+<author>
+<name>/u/juanviera23</name>
+<uri>https://old.reddit.com/user/juanviera23</uri>
+</author>
+<content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1ill18f/great_models_think_alike_and_this_undermines_ai/"&gt; &lt;img alt="Great Models Think Alike and this Undermines AI Oversight" src="https://external-preview.redd.it/BvPrweB4u2rzXxGIWqF_D8vwdPandRjeXx7kZVQLVZc.jpg?width=216&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=3074d0e0dcf27a6d3d02265c9d1155726b988eea" title="Great Models Think Alike and this Undermines AI Oversight" /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href="https://old.reddit.com/user/juanviera23"&gt; /u/juanviera23 &lt;/a&gt; &lt;br /&gt; &lt;span&gt;&lt;a href="https://paperswithcode.com/paper/great-models-think-alike-and-this-undermines"&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1ill18f/great_models_think_alike_and_this_undermines_ai/"&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content>
+<link href="https://old.reddit.com/r/LocalLLaMA/comments/1ill18f/great_models_think_alike_and_this_undermines_ai/"/>
+<category term="LocalLLaMA" label="r/LocalLLaMA"/>
+<published>2025-02-09T18:12:42+00:00</published>
+</entry>
 <entry>
 <id>t3_1ilegz0</id>
 <title>Anyone else feel like mistral is perfectly set up for maximizing consumer appeal through design? I’ve always felt that out of all the open source AI companies mistral sticks out. Now with their new app it’s really showing. Yet they seem to be behind the curve in actual capabilities.</title>
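Net effect of the hunks above: entries are reordered as the subreddit listing shifts, one entry (t3_1iljp5v) ages out, one new entry (t3_1ilnkya) comes in, and timestamps move accordingly, which accounts for the symmetric 51 additions and 51 deletions in this file. A quick well-formedness check on the regenerated file (an assumed convenience, not part of this commit):

# Hypothetical post-update sanity check -- not part of this commit.
import feedparser

d = feedparser.parse("feeds/LocalLLaMA.xml")   # parse the committed Atom file
assert not d.bozo, d.get("bozo_exception")     # bozo is set on XML/parse errors
print(d.feed.updated, len(d.entries), "entries")
print([e.id for e in d.entries][:3])           # e.g. ['t3_1iln1lj', ...]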