12 comments

  • Havoc 2 minutes ago
    > The best strategy is to shrink the model until it fits — either with EXL3 quantization or ModelOpt PTQ — and use GreenBoost's DDR4 pool for KV cache only.

    Does this make sense? I'd have thought the KV is guaranteed to be used 100% of the time, while in, say, a MoE the same can't be said of the weights.

    Though I suppose if you're shooting for huge context, having that allocation go into RAM makes sense, especially when it's allocated but not used yet.

  • 0xbadcafebee 32 minutes ago
    You can already do this with some GPU drivers:

      GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amdttm.pages_limit=5242880 ttm.pages_limit=5242880"
    
    One downside is your kernel isn't going to reserve that memory away from userland. You will still see all the memory at system level as "free". As the GPU driver starts using it, other apps/the OS will try to use the "free" memory, not knowing how much of it is in use (it may show up as "cache", or not at all). Then the OOM killer starts firing, or programs start crashing, and at some point the OS tips over or the GPU driver crashes. You can add loads of swap as a compromise and it works okay, if a bit slow.
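
    For scale: that pages_limit works out to roughly 20 GiB of system RAM handed over to the GPU, and the swap compromise is just ordinary swap setup (sizes below are illustrative, adjust to taste):

      # 5242880 pages x 4 KiB/page = 20 GiB of system RAM the driver may give the GPU
      echo $((5242880 * 4096 / 1024 / 1024 / 1024))   # -> 20

      # the "loads of swap" compromise: give the OS somewhere to spill once the GPU eats the "free" RAM
      sudo fallocate -l 32G /swapfile
      sudo chmod 600 /swapfile
      sudo mkswap /swapfile
      sudo swapon /swapfile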

    In any case, loading a gigantic model just to use system RAM is absurdly slow (due to mem bandwidth), like 1-5 t/s, so it's not practical. It'd take a whole day to process one 86k token request. Just pay a cloud provider $0.01 to do it in 10 seconds.
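
    Back-of-envelope on the "whole day", at both ends of that range:

      # 86,000 tokens at 1 t/s vs 5 t/s
      echo $((86000 / 1 / 3600))   # -> 23 hours
      echo $((86000 / 5 / 3600))   # -> 4 hours (about 4.8)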

  • yjtpesesu2 51 minutes ago
    How does this differ from anything llama.cpp offers, regarding offloading layers? The repo consistently refers to "DDR4". Is there a reason DDR5 won't work with this?
    • svnt 41 minutes ago
      The readme opens with this:

      > I have an RTX 5070 with 12 GB VRAM and I wanted to run glm-4.7-flash:q8_0, which is a 31.8 GB model. The standard options are:

      > Offload layers to CPU — works, but drops token/s by 5–10× because CPU RAM has no CUDA coherence. You end up waiting.

      > Use a smaller quantization — you lose quality. At q4_0 the model is noticeably worse on reasoning tasks.

      > Buy a bigger GPU — not realistic for consumer hardware. A 48 GB card costs more than a complete workstation.

      > None of those felt right, so I built an alternative: route the overflow memory to DDR4 via DMA-BUF, which gives the GPU direct access to system RAM over PCIe 4.0 without a CPU copy involved.

      And then limps home with this caveat on the closest thing to a benchmark:

      > The PCIe 4.0 link (~32 GB/s) is the bottleneck when the model overflows VRAM. The best strategy is to shrink the model until it fits — either with EXL3 quantization or ModelOpt PTQ — and use GreenBoost's DDR4 pool for KV cache only.
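
      Back-of-envelope on that caveat, using the readme's own numbers (and assuming the ~20 GB of weights that don't fit on the 12 GB card have to cross the link once per generated token):

        # 31.8 GB model - 12 GB VRAM ~= 20 GB streamed over PCIe 4.0 x16 at ~32 GB/s
        echo "scale=2; 32 / 20" | bc   # -> 1.60 tokens/s ceiling, regardless of GPU speed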

      I think it refers to DDR4 because that's how the user explained it to their coding agent. LLMs are great at perpetuating unnecessary specificity.

    • kcb 30 minutes ago
      CUDA has had managed memory that pages between VRAM and system RAM for a decade. Problem is doing so is unusably slow for AI purposes. Seems like an unnecessary layer here.
    • xienze 41 minutes ago
      Presumably it means that software doesn’t have to write the same sort of layer offloading support. It’ll “just work” as if you had X GB of VRAM all along.
  • bhewes 1 hour ago
    This has been fun. We can task our nemotron-3-super model to run overnight when our desktops are idle. 4070s and 96 GB of RAM work fine. Slow, but it does its job.
  • daneel_w 1 hour ago
    Related, a couple of years ago: https://old.reddit.com/r/Amd/comments/15t0lsm/i_turned_a_95_...

    "I turned a $95 AMD APU into a 16GB VRAM GPU and it can run stable diffusion!"

    • 3abiton 25 minutes ago
      > it can generate a 50 steps 512x512 image around 1 minute and 50 seconds.

      I have the 4650G APU, and the best way to describe it is: lacking support. This was even more true 3 years ago than it is now. ROCm was absolutely dogshit then; I know this because I tried to do the same thing when that post was made. You had to compile everything from scratch, get the relevant patches, and even then xformers, a library that accelerates diffusion model inference, wasn't supported for Renoir or ROCm back then. Yes, you could generate an image, but it was much slower and riddled with bugs. You couldn't update ROCm because it broke compatibility, which is partly the reason I got into NixOS. That being said, those APUs are a powerhouse. Nowadays I can run decent agentic workflows on them (I have 64 GB of DDR4 RAM, i.e. the APU can take as much as it needs with the latest Linux kernels).

      Just note, diffusion models are still second-class citizens on AMD APUs, and even on AMD GPUs. But then again, there's nothing comparable on the market right now except for what Apple offers.

  • paultendo 57 minutes ago
    Could be a very useful way to do some overnight tasks using spare RAM. Possibly things like LLM-based categorisation, labelling, data cleansing. That's what comes to mind for me anyway.
  • sabareesh 24 minutes ago
    I wish it provided a benchmark comparing Direct RAM offload vs. CPU offload vs. full VRAM.
  • yjftsjthsd-h 1 hour ago
    Previously: https://news.ycombinator.com/item?id=47384557

    (Still cool, still would benefit from better benchmarks)

  • pabs3 2 days ago
    Would be great to get this into mainline Linux.
  • tandr 3 days ago
    A simpler benchmark table would be great. May I suggest Ollama on the base machine, Ollama with T1, Ollama with T1+T2, etc., on midsize and big models, to compare tokens/sec?
  • holoduke 1 hour ago
    This is extremely slow and not useful, in my opinion.
    • daneel_w 59 minutes ago
      It makes the difference between being able to run a lot of machine learning tasks and not being able to at all. Pretty useful.
    • majorchord 1 hour ago
      I would say it depends entirely on your usecase. I don't think there can be a simple "not useful" generalization that applies to everyone.
      • jauntywundrkind 1 hour ago
        Man I wish that was a canned response that could be deployed on demand! Well said.

        I really appreciate thrifty & resourceful points of view. Exploring what-ifs and looking for uses is such a great virtue.

    • bigwheels 1 hour ago
      Can you elaborate beyond the shallow/superficial dismissal?