
Commit

update Reddit feeds
actions-user committed Feb 9, 2025
1 parent c71f258 commit 2a430ef
Showing 2 changed files with 54 additions and 54 deletions.
80 changes: 40 additions & 40 deletions feeds/LocalLLaMA.xml
@@ -2,7 +2,7 @@
<feed xmlns="http://www.w3.org/2005/Atom">
<id>/r/LocalLLaMA/.rss</id>
<title>LocalLlama</title>
-<updated>2025-02-09T21:05:52+00:00</updated>
+<updated>2025-02-09T21:21:46+00:00</updated>
<link href="https://old.reddit.com/r/LocalLLaMA/" rel="alternate"/>
<generator uri="https://lkiesow.github.io/python-feedgen" version="1.0.0">python-feedgen</generator>
<icon>https://www.redditstatic.com/icon.png/</icon>
@@ -33,19 +33,6 @@
<category term="LocalLLaMA" label="r/LocalLLaMA"/>
<published>2025-02-09T19:59:58+00:00</published>
</entry>
<entry>
<id>t3_1ilnkya</id>
<title>Whats the biggest size LLM at Q4 KM or higher fittable on 16GB VRAM?</title>
<updated>2025-02-09T19:57:38+00:00</updated>
<author>
<name>/u/meta_voyager7</name>
<uri>https://old.reddit.com/user/meta_voyager7</uri>
</author>
<content type="html">&lt;!-- SC_OFF --&gt;&lt;div class="md"&gt;&lt;p&gt;GPU is Nvidia 5080. Main use case in order of priority * Coding assistance using Roo code and continue * Creative Writing in English.&lt;/p&gt; &lt;p&gt;Should have &amp;gt; 10 tokens/ second inference speed. 1. What's the biggest size LLM at Q4 KM fittable on 16GB VRAM? 2. Which LLM at this size and quant would you suggest?&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href="https://old.reddit.com/user/meta_voyager7"&gt; /u/meta_voyager7 &lt;/a&gt; &lt;br /&gt; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1ilnkya/whats_the_biggest_size_llm_at_q4_km_or_higher/"&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1ilnkya/whats_the_biggest_size_llm_at_q4_km_or_higher/"&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content>
<link href="https://old.reddit.com/r/LocalLLaMA/comments/1ilnkya/whats_the_biggest_size_llm_at_q4_km_or_higher/"/>
<category term="LocalLLaMA" label="r/LocalLLaMA"/>
<published>2025-02-09T19:57:38+00:00</published>
</entry>
<entry>
<id>t3_1il9k56</id>
<title>I built a Spotify agent with 50 lines of YAML and an open source model.</title>
@@ -59,6 +46,19 @@
<category term="LocalLLaMA" label="r/LocalLLaMA"/>
<published>2025-02-09T07:36:06+00:00</published>
</entry>
<entry>
<id>t3_1ilnkya</id>
<title>Whats the biggest size LLM at Q4 KM or higher fittable on 16GB VRAM?</title>
<updated>2025-02-09T19:57:38+00:00</updated>
<author>
<name>/u/meta_voyager7</name>
<uri>https://old.reddit.com/user/meta_voyager7</uri>
</author>
<content type="html">&lt;!-- SC_OFF --&gt;&lt;div class="md"&gt;&lt;p&gt;GPU is Nvidia 5080. Main use case in order of priority * Coding assistance using Roo code and continue * Creative Writing in English.&lt;/p&gt; &lt;p&gt;Should have &amp;gt; 10 tokens/ second inference speed. 1. What's the biggest size LLM at Q4 KM fittable on 16GB VRAM? 2. Which LLM at this size and quant would you suggest?&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href="https://old.reddit.com/user/meta_voyager7"&gt; /u/meta_voyager7 &lt;/a&gt; &lt;br /&gt; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1ilnkya/whats_the_biggest_size_llm_at_q4_km_or_higher/"&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1ilnkya/whats_the_biggest_size_llm_at_q4_km_or_higher/"&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content>
<link href="https://old.reddit.com/r/LocalLLaMA/comments/1ilnkya/whats_the_biggest_size_llm_at_q4_km_or_higher/"/>
<category term="LocalLLaMA" label="r/LocalLLaMA"/>
<published>2025-02-09T19:57:38+00:00</published>
</entry>
<entry>
<id>t3_1ila7j9</id>
<title>Which open source image generation model is the best? Flux, Stable diffusion, Janus-pro or something else? What do you suggest guys?</title>
@@ -85,19 +85,6 @@
<category term="LocalLLaMA" label="r/LocalLLaMA"/>
<published>2025-02-09T14:42:43+00:00</published>
</entry>
<entry>
<id>t3_1illp96</id>
<title>voice-to-LLM coding assistant for any GUI text editor</title>
<updated>2025-02-09T18:40:14+00:00</updated>
<author>
<name>/u/clickitongue</name>
<uri>https://old.reddit.com/user/clickitongue</uri>
</author>
<content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1illp96/voicetollm_coding_assistant_for_any_gui_text/"&gt; &lt;img alt="voice-to-LLM coding assistant for any GUI text editor" src="https://external-preview.redd.it/t4ScQF1hipu9Ja_H17ev9BazNgQd95dDqbpVkq1rzt8.jpg?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=421c316ff82503cb6f332f752533ebb32704eccd" title="voice-to-LLM coding assistant for any GUI text editor" /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href="https://old.reddit.com/user/clickitongue"&gt; /u/clickitongue &lt;/a&gt; &lt;br /&gt; &lt;span&gt;&lt;a href="https://github.com/farfetchd/clickitongue?tab=readme-ov-file#voice-to-llm-code-focused-typing"&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1illp96/voicetollm_coding_assistant_for_any_gui_text/"&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content>
<link href="https://old.reddit.com/r/LocalLLaMA/comments/1illp96/voicetollm_coding_assistant_for_any_gui_text/"/>
<category term="LocalLLaMA" label="r/LocalLLaMA"/>
<published>2025-02-09T18:40:14+00:00</published>
</entry>
<entry>
<id>t3_1il188r</id>
<title>How Mistral, ChatGPT and DeepSeek handle sensitive topics</title>
@@ -111,6 +98,19 @@
<category term="LocalLLaMA" label="r/LocalLLaMA"/>
<published>2025-02-08T23:46:11+00:00</published>
</entry>
<entry>
<id>t3_1illp96</id>
<title>voice-to-LLM coding assistant for any GUI text editor</title>
<updated>2025-02-09T18:40:14+00:00</updated>
<author>
<name>/u/clickitongue</name>
<uri>https://old.reddit.com/user/clickitongue</uri>
</author>
<content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1illp96/voicetollm_coding_assistant_for_any_gui_text/"&gt; &lt;img alt="voice-to-LLM coding assistant for any GUI text editor" src="https://external-preview.redd.it/t4ScQF1hipu9Ja_H17ev9BazNgQd95dDqbpVkq1rzt8.jpg?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=421c316ff82503cb6f332f752533ebb32704eccd" title="voice-to-LLM coding assistant for any GUI text editor" /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href="https://old.reddit.com/user/clickitongue"&gt; /u/clickitongue &lt;/a&gt; &lt;br /&gt; &lt;span&gt;&lt;a href="https://github.com/farfetchd/clickitongue?tab=readme-ov-file#voice-to-llm-code-focused-typing"&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1illp96/voicetollm_coding_assistant_for_any_gui_text/"&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content>
<link href="https://old.reddit.com/r/LocalLLaMA/comments/1illp96/voicetollm_coding_assistant_for_any_gui_text/"/>
<category term="LocalLLaMA" label="r/LocalLLaMA"/>
<published>2025-02-09T18:40:14+00:00</published>
</entry>
<entry>
<id>t3_1ikyq45</id>
<title>DeepSeek Gained over 100+ Millions Users in 20 days.</title>
@@ -163,19 +163,6 @@
<category term="LocalLLaMA" label="r/LocalLLaMA"/>
<published>2025-02-09T00:33:42+00:00</published>
</entry>
<entry>
<id>t3_1iljyiw</id>
<title>Inspired by the poor man's build, decided to give it a go 6U, p104-100 build!</title>
<updated>2025-02-09T17:27:40+00:00</updated>
<author>
<name>/u/onsit</name>
<uri>https://old.reddit.com/user/onsit</uri>
</author>
<content type="html">&lt;!-- SC_OFF --&gt;&lt;div class="md"&gt;&lt;p&gt;Had a bunch of leftover odds and ends from the crypto craze, mostly riser cards, 16awg 8pin / 6pins. Have a 4u case, but found it a bit cramped the layout of the supermicro board.&lt;/p&gt; &lt;p&gt;Found this 6U case on ebay, which seems awesome as I can cut holes in the GPU riser shelf and just move to regular Gen 3 ribbon risers. But for now the 1x risers are fine for inference.&lt;/p&gt; &lt;ul&gt; &lt;li&gt;E5-2680v4&lt;/li&gt; &lt;li&gt;Supermicro X10SRL-F&lt;/li&gt; &lt;li&gt;256gb DDR4 2400 RDIMMs&lt;/li&gt; &lt;li&gt;1 tb NVME in pcie adapter&lt;/li&gt; &lt;li&gt;6x p104-100 with 8gb bios = 48gb VRAM&lt;/li&gt; &lt;li&gt;430 ATX PSU to power the motherboard&lt;/li&gt; &lt;li&gt;x11 breakout board, with turn on signal from PSU&lt;/li&gt; &lt;li&gt;1200 watt HP PSU powering the risers and GPUs&lt;/li&gt; &lt;/ul&gt; &lt;p&gt;The 6U case is ok, not the best quality when compared to the Rosewill 4u I have. But the double decker setup is really what I was going for. Lack of an IO sheild and complications will arise due to no room for full length PCIes, but if my goal is to use ribbon risers who cares.&lt;/p&gt; &lt;p&gt;All in pretty cheap build, with RTX3090s are too expensive, between 800-1200 now. P40s are 400 now, P100 also stupid expensive.&lt;/p&gt; &lt;ul&gt; &lt;li&gt;&lt;a href="https://imgur.com/Q8EAzaU.jpg"&gt;Imgur&lt;/a&gt;&lt;/li&gt; &lt;li&gt;&lt;a href="https://imgur.com/r7dwfv6.jpg"&gt;Imgur&lt;/a&gt;&lt;/li&gt; &lt;li&gt;&lt;a href="https://imgur.com/Tp7sg9X.jpg"&gt;Imgur&lt;/a&gt;&lt;/li&gt; &lt;li&gt;&lt;a href="https://imgur.com/D1s1r9r.jpg"&gt;Imgur&lt;/a&gt;&lt;/li&gt; &lt;/ul&gt; &lt;p&gt;This was a relatively cost efficient build, still putting me under the cost of 1 RTX3090, and giving me room to grow to better cards.&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href="https://old.reddit.com/user/onsit"&gt; /u/onsit &lt;/a&gt; &lt;br /&gt; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1iljyiw/inspired_by_the_poor_mans_build_decided_to_give/"&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1iljyiw/inspired_by_the_poor_mans_build_decided_to_give/"&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content>
<link href="https://old.reddit.com/r/LocalLLaMA/comments/1iljyiw/inspired_by_the_poor_mans_build_decided_to_give/"/>
<category term="LocalLLaMA" label="r/LocalLLaMA"/>
<published>2025-02-09T17:27:40+00:00</published>
</entry>
<entry>
<id>t3_1ikvnfx</id>
<title>I really need to upgrade</title>
@@ -189,6 +176,19 @@
<category term="LocalLLaMA" label="r/LocalLLaMA"/>
<published>2025-02-08T19:38:41+00:00</published>
</entry>
<entry>
<id>t3_1iljyiw</id>
<title>Inspired by the poor man's build, decided to give it a go 6U, p104-100 build!</title>
<updated>2025-02-09T17:27:40+00:00</updated>
<author>
<name>/u/onsit</name>
<uri>https://old.reddit.com/user/onsit</uri>
</author>
<content type="html">&lt;!-- SC_OFF --&gt;&lt;div class="md"&gt;&lt;p&gt;Had a bunch of leftover odds and ends from the crypto craze, mostly riser cards, 16awg 8pin / 6pins. Have a 4u case, but found it a bit cramped the layout of the supermicro board.&lt;/p&gt; &lt;p&gt;Found this 6U case on ebay, which seems awesome as I can cut holes in the GPU riser shelf and just move to regular Gen 3 ribbon risers. But for now the 1x risers are fine for inference.&lt;/p&gt; &lt;ul&gt; &lt;li&gt;E5-2680v4&lt;/li&gt; &lt;li&gt;Supermicro X10SRL-F&lt;/li&gt; &lt;li&gt;256gb DDR4 2400 RDIMMs&lt;/li&gt; &lt;li&gt;1 tb NVME in pcie adapter&lt;/li&gt; &lt;li&gt;6x p104-100 with 8gb bios = 48gb VRAM&lt;/li&gt; &lt;li&gt;430 ATX PSU to power the motherboard&lt;/li&gt; &lt;li&gt;x11 breakout board, with turn on signal from PSU&lt;/li&gt; &lt;li&gt;1200 watt HP PSU powering the risers and GPUs&lt;/li&gt; &lt;/ul&gt; &lt;p&gt;The 6U case is ok, not the best quality when compared to the Rosewill 4u I have. But the double decker setup is really what I was going for. Lack of an IO sheild and complications will arise due to no room for full length PCIes, but if my goal is to use ribbon risers who cares.&lt;/p&gt; &lt;p&gt;All in pretty cheap build, with RTX3090s are too expensive, between 800-1200 now. P40s are 400 now, P100 also stupid expensive.&lt;/p&gt; &lt;ul&gt; &lt;li&gt;&lt;a href="https://imgur.com/Q8EAzaU.jpg"&gt;Imgur&lt;/a&gt;&lt;/li&gt; &lt;li&gt;&lt;a href="https://imgur.com/r7dwfv6.jpg"&gt;Imgur&lt;/a&gt;&lt;/li&gt; &lt;li&gt;&lt;a href="https://imgur.com/Tp7sg9X.jpg"&gt;Imgur&lt;/a&gt;&lt;/li&gt; &lt;li&gt;&lt;a href="https://imgur.com/D1s1r9r.jpg"&gt;Imgur&lt;/a&gt;&lt;/li&gt; &lt;/ul&gt; &lt;p&gt;This was a relatively cost efficient build, still putting me under the cost of 1 RTX3090, and giving me room to grow to better cards.&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href="https://old.reddit.com/user/onsit"&gt; /u/onsit &lt;/a&gt; &lt;br /&gt; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1iljyiw/inspired_by_the_poor_mans_build_decided_to_give/"&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href="https://old.reddit.com/r/LocalLLaMA/comments/1iljyiw/inspired_by_the_poor_mans_build_decided_to_give/"&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content>
<link href="https://old.reddit.com/r/LocalLLaMA/comments/1iljyiw/inspired_by_the_poor_mans_build_decided_to_give/"/>
<category term="LocalLLaMA" label="r/LocalLLaMA"/>
<published>2025-02-09T17:27:40+00:00</published>
</entry>
<entry>
<id>t3_1ilbrxo</id>
<title>LynxHub: Now support Open-WebUI with full configurations</title>
28 changes: 14 additions & 14 deletions feeds/ollama.xml
@@ -2,24 +2,11 @@
<feed xmlns="http://www.w3.org/2005/Atom">
<id>/r/ollama/.rss</id>
<title>ollama</title>
-<updated>2025-02-09T21:05:53+00:00</updated>
+<updated>2025-02-09T21:21:46+00:00</updated>
<link href="https://old.reddit.com/r/ollama/" rel="alternate"/>
<generator uri="https://lkiesow.github.io/python-feedgen" version="1.0.0">python-feedgen</generator>
<icon>https://www.redditstatic.com/icon.png/</icon>
<subtitle>Atom feed for r/ollama</subtitle>
<entry>
<id>t3_1ikqjaj</id>
<title>PlanExe: breakdown a description into a detailed plan, WBS, SWOT.</title>
<updated>2025-02-08T16:03:42+00:00</updated>
<author>
<name>/u/neoneye2</name>
<uri>https://old.reddit.com/user/neoneye2</uri>
</author>
<content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href="https://old.reddit.com/r/ollama/comments/1ikqjaj/planexe_breakdown_a_description_into_a_detailed/"&gt; &lt;img alt="PlanExe: breakdown a description into a detailed plan, WBS, SWOT." src="https://preview.redd.it/ho8hbemkuxhe1.jpeg?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=e47f8681c23fcdbbae6a0196536c121bbab3aa24" title="PlanExe: breakdown a description into a detailed plan, WBS, SWOT." /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href="https://old.reddit.com/user/neoneye2"&gt; /u/neoneye2 &lt;/a&gt; &lt;br /&gt; &lt;span&gt;&lt;a href="https://i.redd.it/ho8hbemkuxhe1.jpeg"&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href="https://old.reddit.com/r/ollama/comments/1ikqjaj/planexe_breakdown_a_description_into_a_detailed/"&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content>
<link href="https://old.reddit.com/r/ollama/comments/1ikqjaj/planexe_breakdown_a_description_into_a_detailed/"/>
<category term="ollama" label="r/ollama"/>
<published>2025-02-08T16:03:42+00:00</published>
</entry>
<entry>
<id>t3_1il3mpl</id>
<title>70 Page PDF Refuses to Be Processed via Ollama CLI</title>
@@ -267,6 +254,19 @@
<category term="ollama" label="r/ollama"/>
<published>2025-02-09T20:25:10+00:00</published>
</entry>
<entry>
<id>t3_1ilp9bc</id>
<title>script to import / export models between devices locally</title>
<updated>2025-02-09T21:08:35+00:00</updated>
<author>
<name>/u/nahushrk</name>
<uri>https://old.reddit.com/user/nahushrk</uri>
</author>
<content type="html">&lt;!-- SC_OFF --&gt;&lt;div class="md"&gt;&lt;p&gt;wanted to share this simple scrip that lets you export the models downloaded to a machine to another machine without re-downloading it again&lt;/p&gt; &lt;p&gt;particularly useful when models are large and/or you want to share the models locally, saves time and bandwidth&lt;/p&gt; &lt;p&gt;just make sure the ollama version is same on both machines in case the storage mechanism changes&lt;/p&gt; &lt;p&gt;&lt;a href="https://gist.github.com/nahushrk/5d980e676c4f2762ca385bd6fb9498a9"&gt;https://gist.github.com/nahushrk/5d980e676c4f2762ca385bd6fb9498a9&lt;/a&gt;&lt;/p&gt; &lt;p&gt;the way this works:&lt;/p&gt; &lt;ul&gt; &lt;li&gt;export a model by name and size &lt;/li&gt; &lt;li&gt;a .tar file is created in dir where you ran this script&lt;/li&gt; &lt;li&gt;copy .tar file and this script to another machine &lt;/li&gt; &lt;li&gt;run import subcommand pointing to .tar file &lt;/li&gt; &lt;li&gt;run ollama list to see new model being added&lt;/li&gt; &lt;/ul&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href="https://old.reddit.com/user/nahushrk"&gt; /u/nahushrk &lt;/a&gt; &lt;br /&gt; &lt;span&gt;&lt;a href="https://old.reddit.com/r/ollama/comments/1ilp9bc/script_to_import_export_models_between_devices/"&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href="https://old.reddit.com/r/ollama/comments/1ilp9bc/script_to_import_export_models_between_devices/"&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content>
<link href="https://old.reddit.com/r/ollama/comments/1ilp9bc/script_to_import_export_models_between_devices/"/>
<category term="ollama" label="r/ollama"/>
<published>2025-02-09T21:08:35+00:00</published>
</entry>
<entry>
<id>t3_1il0zea</id>
<title>Just released an open-source Mac client for Ollama built with Swift/SwiftUI</title>
