NHacker Next
Gluon: a GPU programming language based on the same compiler stack as Triton (github.com)
69 points by matt_d 10 hours ago | 19 comments
lukax 10 hours ago [-]
Is this Triton's reply to NVIDIA's tilus[1]? Tilus is supposed to be lower level (e.g. you have control over registers). NVIDIA really does not want the CUDA ecosystem to move to Triton, as Triton also supports AMD and other accelerators. So with Gluon you get access to lower-level features and you can stay within the Triton ecosystem.

[1] https://github.com/NVIDIA/tilus

bobmarleybiceps 14 minutes ago [-]
it feels like Nvidia has 30 "tile-based DSLs with python-like syntax for ML kernels" that are in the works lol. I think they are very worried about open source and portable alternatives to CUDA.
mdaniel 9 hours ago [-]
Also it REALLY jams me up that this is a thing, complicating discussions: https://github.com/triton-inference-server/server
reasonableklout 9 hours ago [-]
It sounds like they share that goal. Gluon is a thing because the Triton team realized over the last few months that Blackwell is a significant departure from Hopper, and achieving >80% SoL kernels is becoming intractable as the Triton middle-end simply can't keep up.

Some more info in this issue: https://github.com/triton-lang/triton/issues/7392

saagarjha 7 hours ago [-]
I believe it’s the other way around; Gluon exposes the primitives Triton was built on top of.
some_guy_nobel 2 hours ago [-]
Amazon (+ Microsoft) already released a language for ML called gluon 8 years ago: https://aws.amazon.com/blogs/aws/introducing-gluon-a-new-lib...

autogluon is popular as well: https://github.com/autogluon/autogluon

xcodevn 3 hours ago [-]
Interesting, I can see this being very similar to Nvidia's CUTE DSL. This hints that we are converging to a (local) optimal design for Python-based DSL kernel programming.
ronsor 10 hours ago [-]
The fact that the "language" is still Python code which has to be traced in some way is a bit off-putting. It feels a bit hacky. I'd rather have a separate compiler, honestly.
JonChesterfield 9 hours ago [-]
Mojo for Python syntax without the AST-walking decorator, CUDA for C++ syntax over controlling the machine, ad hoc code generators writing MLIR for data-driven parametric approaches. The design space is filling out over time.
pizlonator 7 hours ago [-]
The fact that these are all add-on syntaxes is strange. I have my ideas about why (like you want to write code that cooperates with host code).

Do any of y’all have clear ideas about why it is that way? Why not have a really great bespoke language?

saagarjha 7 hours ago [-]
Hard to beat the trifecta of a familiar language, the same source files and toolchain, and JIT compilation
pizlonator 6 hours ago [-]
That’s sort of what I assumed, yeah. And I think that makes sense.

But they end up adding super sophisticated concepts to the familiar language. Makes me wonder if the end result is actually better than having a bespoke language.

derbOac 9 hours ago [-]
Yeah that struck me as odd. It's more like a Python library or something.
zer0zzz 7 hours ago [-]
It's a DSL, not a library. The kernel launch parameters and the AST walk generate IR from the Python.
zer0zzz 7 hours ago [-]
This is pretty common among these ML toolchains, and not a big deal. They use Python's ast library and the function annotations to implement an AST walker and code generator. It works quite well.
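The decorator-plus-ast pattern described above can be sketched in a few lines of stdlib Python. This is a toy illustration, not Triton's or Gluon's actual compiler: the `kernel` decorator, the `IREmitter` visitor, and the "IR" strings are all invented for the example.

```python
import ast
import inspect
import textwrap

def kernel(fn):
    """Hypothetical decorator: grab the function's source, parse it with
    the stdlib ast module, and walk the tree to emit a toy IR instead of
    compiling anything for real."""
    source = textwrap.dedent(inspect.getsource(fn))
    fndef = ast.parse(source).body[0]  # the FunctionDef node

    class IREmitter(ast.NodeVisitor):
        def __init__(self):
            self.ir = []

        def visit_BinOp(self, node):
            # Record the operator (Add, Mult, ...), then descend into operands.
            self.ir.append(f"binop {type(node.op).__name__}")
            self.generic_visit(node)

        def visit_Name(self, node):
            self.ir.append(f"load {node.id}")

    emitter = IREmitter()
    for stmt in fndef.body:  # skip the decorator line itself
        emitter.visit(stmt)
    fn._ir = emitter.ir  # attach the "compiled" IR to the function
    return fn

@kernel
def axpy(a, x, y):
    return a * x + y

print(axpy._ir)  # ['binop Add', 'binop Mult', 'load a', 'load x', 'load y']
```

Real tracing DSLs do much more (type inference, launch-grid handling, lowering to MLIR/LLVM), but the entry point is the same trick: the decorated function is never executed directly; its AST is the input to a code generator.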
ivolimmen 8 hours ago [-]
Not to be confused with the Gluon UI toolkit for Java : https://gluonhq.com/products/javafx/
liuliu 5 hours ago [-]
Or the GluonCV by mxnet guys (ancient! https://github.com/dmlc/gluon-cv)
ericdotlee 8 hours ago [-]
Why is zog so popular these days? Seems really cool but I have yet to get the buzz / learn it.

Is there a big reason why Triton is considered a "failure"?

huevosabio 9 hours ago [-]
Not to be confused with gluon, the embeddable language in Rust: https://github.com/gluon-lang/gluon