Exactly, and running many LLMs is now possible even on a Pi4, just slowly. I think the point is that running neural networks with billions of parameters on the desktop is daily use for some people. From what I can tell, that sort of thing is likely to become more common as local AI gets built into desktop applications.
Running AI locally gives me more control over prompts and the seed, which matters if you make a Stable Diffusion image you like and want to make it again.
My use case for SD is making background images for model train layouts.
Sure you can buy them for under $10 but where is the DIY fun in that?
The other option is the online services, where you can get more AI images by paying a monthly fee but have no idea what random seed was used.
For the best control over your AI-generated output on your desktop Pi/PC, whether image, text, or speech, it needs to be local.
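The seed point is easy to demonstrate with nothing but the standard library. This is a toy PRNG sketch, not actual Stable Diffusion code, but SD's noise generator works the same way in principle: feed it the same seed and you get the same starting noise, hence the same image.

```python
import random

def toy_latents(seed, n=4):
    # Same seed -> same "noise", which is why a saved seed
    # lets you regenerate an image you liked.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

a = toy_latents(1234)
b = toy_latents(1234)  # same seed, identical values
c = toy_latents(9999)  # different seed, different values
print(a == b, a == c)  # True False
```

An online service that hides the seed is effectively throwing away the `1234` above, so you can never reproduce the run.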
Grok has now been open sourced, and while ~300 billion parameters might be a bit much for a desktop today, it won't be long before improvements make that possible in the home.
I don't expect a Blackwell chip on a HAT+, but Coral already adds limited ML to Pis.
With the PCIe interface, an AI chip, a baby Groq/Blackwell with lots of DRAM, on a HAT+ is within reach.
Coral has been around for a while now; what will the next version be like?
Software improvements can reduce hardware requirements.
OnnxStream using XNNPACK means Stable Diffusion XL can make an image in 3 minutes on a Pi5.
But if I use Easy Diffusion to install the normal Stable Diffusion and all its dependencies, the same prompt and seed take 40 minutes to make the same image.
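Part of why tricks like OnnxStream's work is shrinking the weights themselves: quantizing 32-bit floats down to 8-bit integers cuts memory roughly 4x, which is a big piece of how billion-parameter models squeeze onto small boards. A stdlib-only sketch of the standard affine quantization idea (this is an illustration, not OnnxStream's actual code):

```python
import array

def quantize(weights, num_bits=8):
    # Map floats onto unsigned ints via a scale and offset (affine quantization).
    qmax = (1 << num_bits) - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / qmax or 1.0  # guard against all-equal weights
    q = array.array('B', (round((w - lo) / scale) for w in weights))
    return q, scale, lo

def dequantize(q, scale, lo):
    return [lo + v * scale for v in q]

weights = [0.013, -0.42, 0.91, -0.007, 0.33]
q, scale, lo = quantize(weights)
approx = dequantize(q, scale, lo)
# 1 byte per weight instead of 4, at the cost of a small rounding error
print(q.itemsize, max(abs(a - b) for a, b in zip(weights, approx)))
```

The rounding error per weight is bounded by half the scale step, which is why well-trained networks usually tolerate it.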
Doing all this magic on this Pi5 I use as a daily driver is on topic.
And after trying to replicate this on an x86 Debian box without success so far, well, the Pi5 is easy; it just worked.
Most desktop LLM apps expect a GPU or better, duh; AI is mostly controlled by those invested in selling chips or services.
Statistics: Posted by Gavinmc42 — Tue Mar 19, 2024 10:04 pm