I have made the gradient triangle in OpenGL.
But because I'm me, I had to do it special-like. I had to take an example that was in C++ using OpenGL 3.3 and GLFW, and implement it in D, with OpenGL ES 2.0 and SDLGL 2.0.
It took me like 3 days to get here BUT I AM HERE.
So, I hadn't fully realized there was kind of a major flaw in this. The hello triangle I was copying from used a GL 3.x VBO/VAO setup to draw the triangle. I'm using GLES 2.0 (and I want it to run on a raspi, so no GL_OES_vertex_array_object extension either).
It wouldn't draw the triangle. So I ended up fighting and looking at other examples until I found one that used a system-memory vertex array. And that's cool, but I want to be able to draw straight from a VBO, and that wouldn't work no matter what I did.
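For reference, drawing straight from a VBO under GLES 2.0 (no VAOs at all) looks roughly like this. It's a minimal sketch: the attribute location 0 and the vbo handle are placeholders, and I'm assuming a D binding that exposes the standard GL entry points.

    // GLES 2.0 has no VAOs: bind the buffer and re-point the attributes every time you draw.
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);
    // With a VBO bound, the last argument is a byte offset into the buffer, not a client-memory pointer.
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, null);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableVertexAttribArray(0);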
And it didn't even work when I moved up to GLES 3.2 and put the VAO code back in.
I was at the point where I was starting to wonder if Mesa's LibGLESv2.so was somehow wonky. Then I tried another demo that did exactly what I wanted, and it worked perfectly well when I compiled the C++ version.
It had a few differences. Only 2D coordinates for the triangle corners, different colors, no version specifiers on the shader code. But it still looked like it was doing the same thing as my not-working code, so I hand-translated it directly to D, this time making no unnecessary changes.
And it kinda worked. It gave me a squashed triangle. And when I tried my first little tweak, making the vertices three-dimensional: no triangle at all.
But I could tweak the C++ version just fine! So I started trying to hammer out the language differences as much as I could. Changing my vertex array to a static array made it work perfectly in D. So hey, Garbage Collector issue! I've banged my head on that a number of times. But nope, that wasn't it: keeping a reference to the vertex array elsewhere didn't fix anything, and it shouldn't be needed anyway once the data has been uploaded to the VBO!
But then I realized. The code to upload the vertices to the VBO? In C++ the length parameter was sizeof(array). I unthinkingly translated that to D's array.sizeof, which is actually subtly different.
In D, .sizeof is a compile-time property: the size of the data type. For a static array, that's the right answer in this case. But for a dynamic array? It's the size of a fat pointer, the length-and-pointer pair, not the data it refers to.
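Here's a minimal sketch of the upload, assuming a D binding (bindbc-opengl or similar) with the standard glBufferData signature; the vertex values are just placeholders:

    float[] vertices = [
        -1.0f, -1.0f, 0.0f,
         1.0f, -1.0f, 0.0f,
         0.0f,  1.0f, 0.0f,
    ];

    // BUG: for a dynamic array, vertices.sizeof is the size of the slice itself
    // (a 16-byte fat pointer on 64-bit), not the size of the data it points to.
    // glBufferData(GL_ARRAY_BUFFER, vertices.sizeof, vertices.ptr, GL_STATIC_DRAW);

    // Correct: compute the byte length of the data actually being uploaded.
    glBufferData(GL_ARRAY_BUFFER,
                 cast(GLsizeiptr)(vertices.length * float.sizeof),
                 vertices.ptr,
                 GL_STATIC_DRAW);

With a static array (float[9] vertices), .sizeof really is the 36 bytes of vertex data, which is why that version worked.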
The 2D coordinate version was working because on a 64-bit CPU, a fat pointer is exactly the size of four floats. The bottom two points of the triangle come through fine, and the final point, never actually uploaded, comes back as 0,0. Meaning a squashed but visible triangle.
But in a 3D coordinate system, you get one good vertex and two junk ones. Degenerate triangle, nothing to draw.
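Spelling out the byte math (64-bit build assumed):

    // A D dynamic array is a { length, pointer } pair.
    static assert((float[]).sizeof == 16);   // 16 bytes == four floats
    // 2D triangle: 3 vertices * 2 floats * 4 bytes = 24 bytes needed, only 16 uploaded
    //   -> two good vertices, the third comes back as zeros.
    // 3D triangle: 3 vertices * 3 floats * 4 bytes = 36 bytes needed, only 16 uploaded
    //   -> one good vertex, junk after that.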
LESSONS LEARNED. JUST BECAUSE YOU ARE TRAVELLING UNKNOWN TERRITORY DOES NOT MEAN YOU ARE IMMUNE TO MAKING MISTAKES ON YOUR WELL-KNOWN FUNDAMENTALS.

