6 comments

  • zmmmmm 11 minutes ago
    > Talos is a custom FPGA-based hardware accelerator built from the ground up to execute Convolutional Neural Networks with extreme efficiency

    Makes it sound like it's new hardware. This is just (I'm inferring) software to program an off the shelf FPGA to do convolutions. Very minimal ones by the look of it (MNIST etc).

  • tadfisher 42 minutes ago
    My advice: write your own English prose, and try not to let "LLM-speak" leak into your documentation when using them to edit. Ironically, LLMs just plain suck at writing English, like they're incredibly overfit on marketing copy and press releases. I hope someone is working on this, or at least cares about the problem, because that would make this brave new world palatable for reading.
    • vidarh 35 minutes ago
      They really don't, if you actually bother prompting them. Give them a voice sample, and tell them to match the tone, and you already get something 10x better. Have them revise with a list of common writing problems - not just common LLM patterns, but guidelines for writing better - and you get rid of more.

      Properly prompted, an LLM writes far better than most people.

      • tadfisher 17 minutes ago
        That is precisely the problem. When writing technical documentation, such as the landing page for an FPGA inference engine, a model should not need to be prompted to use proper voice and to avoid marketing language. There should be enough context in the text of the prompt itself.
      • jdcasale 21 minutes ago
        Without weighing in on whether this is true, I'll point out that LLMs could both be better writers than most people and also be bad writers.

        Writing is a difficult skill that many (most?) educational systems do not effectively teach. Most people are terrible writers.

  • roughly 19 minutes ago
    If the author and/or anyone else hasn't seen Sidero's Talos Linux distro, it's my current favorite way to spin up a bare metal Kubernetes cluster:

    https://www.talos.dev/

  • noosphr 42 minutes ago
    > It isn't just a reimplementation of existing software logic in hardware; it is a rethinking of how deep learning inference should work at the circuit level. [...] By implementing the entire inference pipeline in SystemVerilog, we achieve deterministic, cycle-accurate control over every calculation. [...] But don’t let the two-week timeline fool you. Those were two weeks full of 18-hour days, fueled by caffeine and sheer stubbornness.

    I'm having a hard time figuring out if this is satire or not.

    • jcgrillo 4 minutes ago
From personal experience, caffeine is not enough for 2wk of 18hr days... you need some pervitin-type shit
  • arjvik 43 minutes ago
    Not to take away from this cool project, but its design decisions are incredibly impractical.
    • refulgentis 40 minutes ago
I honestly can’t tell if it’s a cool project or just a .md file someone with 0 experience had an LLM output.
  • refulgentis 41 minutes ago
    This is horrible LLM slop, my god.

    Winced my way through “Convolutions are in CNNs (it’s literally in the name, Convolutional Neural Network)”, then had to stop.

It’s honestly offensive to me. It doesn’t even make sense on its own terms. For some reason we fly from LLM inferencing to toy MNIST to convolutions with zero transition or sense of structure.

    • fc417fc802 15 minutes ago
      Aside from the verbose AI slop it's an interesting hobby project for exploring FPGAs. But it doesn't do anything you can't do on CPU by using a model that's small enough to fit in cache. In terms of practical use you'd be better off implementing a minuscule model using vector intrinsics in your favorite systems language.
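      For scale, the working set of an MNIST-sized convolution really is tiny. A minimal sketch in plain C (illustrative only, not the project's code; a real version would use AVX/NEON intrinsics or just let the compiler auto-vectorize at `-O2`):

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* One 3x3 "valid" convolution over a 28x28 MNIST-sized input.
       * Working set: 28*28 + 26*26 + 9 floats ~= 5.9 KB, which fits
       * comfortably in L1 cache -- so a plain CPU loop never leaves
       * cache, the point fc417fc802 is making. */
      #define IN  28
      #define OUT (IN - 2)

      static void conv3x3(const float img[IN][IN],
                          const float k[3][3],
                          float out[OUT][OUT]) {
          for (int y = 0; y < OUT; y++)
              for (int x = 0; x < OUT; x++) {
                  float acc = 0.0f;
                  /* These inner loops are trivially auto-vectorized by
                   * gcc/clang; hand-written intrinsics buy a bit more. */
                  for (int ky = 0; ky < 3; ky++)
                      for (int kx = 0; kx < 3; kx++)
                          acc += img[y + ky][x + kx] * k[ky][kx];
                  out[y][x] = acc;
              }
      }

      int main(void) {
          static float img[IN][IN], out[OUT][OUT];
          const float box[3][3] = {{1,1,1},{1,1,1},{1,1,1}};
          for (int y = 0; y < IN; y++)
              for (int x = 0; x < IN; x++)
                  img[y][x] = 1.0f;  /* constant test image */
          conv3x3(img, box, out);
          /* A 3x3 box filter over a constant-1 image yields 9 everywhere. */
          assert(out[0][0] == 9.0f);
          assert(out[OUT - 1][OUT - 1] == 9.0f);
          printf("working set: %zu bytes\n",
                 sizeof img + sizeof out + sizeof box);
          return 0;
      }
      ```

      Since the whole model-plus-activations footprint is a few KB, the FPGA's usual advantage (avoiding DRAM round-trips) mostly evaporates at this scale.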