by fanf2 on 9/19/2020, 11:49:43 AM
by lionkor on 9/19/2020, 2:36:59 PM
> Promising long-term ABI stability would prevent us from fixing mistakes and providing best in class performance. So, we make no such promises.
Wait, NVIDIA actually gets it? Neat!
by lars on 9/19/2020, 10:25:48 AM
It really is a tiny subset of the C++ standard library, but I'm happy to see they're continuing to expand it: https://nvidia.github.io/libcudacxx/api.html
by RcouF1uZ4gsC on 9/19/2020, 10:37:51 AM
For everyone wondering where all the data structures and algorithms are: vector and several algorithms are implemented by Thrust. https://docs.nvidia.com/cuda/thrust/index.html
It seems the big addition of libcu++ over Thrust is synchronization.
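For context, a minimal sketch of that division of labor: containers and algorithms come from Thrust, in std-like form, executing on the GPU. This assumes a CUDA toolchain with nvcc; the file name is arbitrary.

```cuda
// Compile with: nvcc thrust_sketch.cu
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>
#include <cstdio>

int main() {
    int host[] = {4, 1, 3, 2};

    // A vector living in device memory: Thrust's stand-in for std::vector.
    thrust::device_vector<int> v(host, host + 4);

    // Standard-library-style algorithms, dispatched to the GPU.
    thrust::sort(v.begin(), v.end());
    int sum = thrust::reduce(v.begin(), v.end(), 0);

    std::printf("min = %d, sum = %d\n", (int)v[0], sum);  // min = 1, sum = 10
    return 0;
}
```

What Thrust does not give you, and libcu++ does, is primitives like atomics and barriers for coordinating threads *within* such a computation.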
by davvid on 9/19/2020, 6:10:22 PM
Here's a somewhat related talk from CppCon '19: "The One-Decade Task: Putting std::atomic in CUDA"
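The end result of that decade of work is `cuda::std::atomic`. A rough sketch of how it can be used from a kernel (assumes nvcc targeting sm_70 or newer, where libcu++'s atomics are supported; the file name is arbitrary):

```cuda
// Compile with: nvcc -arch=sm_70 atomic_sketch.cu
#include <cuda/std/atomic>
#include <cstdio>
#include <new>

__global__ void count(cuda::std::atomic<int>* counter) {
    // Same interface as std::atomic, but usable in device code.
    counter->fetch_add(1, cuda::std::memory_order_relaxed);
}

int main() {
    cuda::std::atomic<int>* counter;
    cudaMallocManaged(&counter, sizeof(*counter));
    new (counter) cuda::std::atomic<int>(0);  // placement-new into managed memory

    count<<<4, 256>>>(counter);
    cudaDeviceSynchronize();

    std::printf("%d\n", counter->load());  // 1024 (4 blocks * 256 threads)
    cudaFree(counter);
    return 0;
}
```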
by jlebar on 9/19/2020, 4:37:43 PM
This is super-cool.
For those of us who can't adopt it right away, note that you can compile your CUDA code with `--expt-relaxed-constexpr` and call any constexpr function from device code. That includes all the constexpr functions in the standard library!
This gets you quite a bit, but not e.g. std::atomic, which is one of the big things in here.
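To illustrate the flag: a minimal sketch of calling a host constexpr standard-library function (`std::max`, constexpr since C++14) from a kernel. The file name is arbitrary; assumes an nvcc toolchain.

```cuda
// Compile with: nvcc --expt-relaxed-constexpr constexpr_sketch.cu
#include <algorithm>
#include <cstdio>

__global__ void kernel(int* out) {
    // std::max is constexpr, so with --expt-relaxed-constexpr it can be
    // called from device code even though it isn't marked __device__.
    *out = std::max(3, 7);
}

int main() {
    int* out;
    cudaMallocManaged(&out, sizeof(int));
    kernel<<<1, 1>>>(out);
    cudaDeviceSynchronize();
    std::printf("%d\n", *out);  // 7
    cudaFree(out);
    return 0;
}
```

This trick covers pure constexpr functions only; anything with runtime state, like std::atomic, still needs libcu++.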
by BoppreH on 9/19/2020, 11:12:42 AM
Unfortunate name: "cu" is the most well-known slang for "anus" in Brazil (population: 200+ million). "Libcu++" is sure to cause snickering.
by einpoklum on 9/19/2020, 7:06:44 PM
1. How do we know what parts of the library are usable on CUDA devices, and which are only usable in host-side code?
2. How compatible is this with libstdc++ and/or libc++, when used independently?
I'm somewhat suspicious of the presumption of us using NVIDIA's version of the standard library for our host-side work.
Finally, I'm not sure that, for device-side work, libc++ is a better base to start off of than, say, EASTL (which I used for my tuple class: https://github.com/eyalroz/cuda-kat/blob/master/src/kat/tupl... ).
...
Partial self-answer to (1.): per https://nvidia.github.io/libcudacxx/api.html, apparently only a small part of the library is actually implemented.
by Mr_lavos on 9/19/2020, 11:31:42 AM
Does this mean you can do operations on structs that live on the GPU hardware?
by gj_78 on 9/19/2020, 12:59:07 PM
I really do not understand why a (very good) hardware provider is keen to create/direct/influence custom software for its users.
Isn't this exactly what GPU firmware is expected to do? Why do they need to run software in the same memory space as my mail reader?
by scott31 on 9/19/2020, 10:31:39 AM
A pathetic attempt to lock developers into their hardware.
“Whenever a new major CUDA Compute Capability is released, the ABI is broken. A new NVIDIA C++ Standard Library ABI version is introduced and becomes the default and support for all older ABI versions is dropped.”
https://github.com/NVIDIA/libcudacxx/blob/main/docs/releases...