I am a developer. One of the programs I am developing uses OpenGL 3.2 for output.
I primarily develop on the Mac, but I also support Linux. So I have Ubuntu 18.04.1 installed in a virtual machine within Fusion 10.
I have two pipelines implemented in my code, relying on the same inputs: the same vertex arrays and the same sequence of calls, just a different shader. Both work flawlessly on the Mac.
Over in Ubuntu-via-VMWare, the first case works as it should. But in the second, VMWare announces that my machine has crashed and offers to restart it. Since the whole virtual machine falls over, there's no core dump, and I'm not even entirely sure that everything I write to stdout is reaching me. The file system certainly comes back up in a corrupted state sometimes, so logging to disk isn't necessarily reliable either.
Is there any way to debug what's going on here? Any information I can get from VMWare Fusion, perhaps, that will allow me to introspect its OpenGL driver?
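One avenue I can think of is installing a GL debug callback and flushing straight to stderr, in the hope that the driver reports something before the guest dies. This is only a sketch and assumes the context inside the VM actually exposes KHR_debug (I haven't verified that it does), with GLEW as the loader:

    // Sketch: install a driver debug callback, assuming KHR_debug is available
    // in the VM's context and GLEW (or a similar loader) is already initialised.
    #include <GL/glew.h>
    #include <cstdio>

    static void GLAPIENTRY debugCallback(GLenum source, GLenum type, GLuint id,
                                         GLenum severity, GLsizei length,
                                         const GLchar *message, const void *userParam)
    {
        // Write straight to stderr and flush, since the VM may fall over before
        // any buffered output makes it out.
        std::fprintf(stderr, "GL debug [source=0x%x type=0x%x severity=0x%x id=%u]: %s\n",
                     source, type, severity, id, message);
        std::fflush(stderr);
    }

    void installDebugCallback()
    {
        if (GLEW_KHR_debug) {
            glEnable(GL_DEBUG_OUTPUT);
            glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);   // report at the offending call
            glDebugMessageCallback(debugCallback, nullptr);
        } else {
            // ARB_debug_output would be the fallback, but I don't know whether
            // either extension is exposed by the virtualised driver.
            std::fprintf(stderr, "No GL debug output extension available.\n");
        }
    }

But that still relies on the driver surviving long enough to call me back, so I'd welcome anything VMWare-side too.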
Alas, using software rendering doesn't seem to give me all the OpenGL features I need. In fact it gives very strange behaviour: e.g. glCheckFramebufferStatus returns 0, which per the man page means the call itself failed (as opposed to returning one of the ordinary named statuses), but glGetError then also returns 0, which is GL_NO_ERROR, i.e. no error recorded.
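For reference, the check in question is roughly the following (simplified, with the FBO setup elided; the helper name is just for illustration):

    #include <GL/glew.h>
    #include <cstdio>

    bool framebufferLooksComplete()
    {
        GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
        GLenum err = glGetError();

        if (status == GL_FRAMEBUFFER_COMPLETE)
            return true;

        // status == 0 means the call itself failed, which should normally be
        // accompanied by a flag from glGetError -- but under software rendering
        // err comes back as GL_NO_ERROR (0) as well.
        std::fprintf(stderr, "framebuffer status = 0x%x, glGetError = 0x%x\n", status, err);
        std::fflush(stderr);
        return false;
    }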
So I'm at a bit of a loss here. I feel like VMWare Fusion probably shouldn't crash even if I've done something egregious, like referencing out-of-bounds elements in a vertex array, but given that it is crashing, I'd really like to know whether I've made an error on my side.
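In case it is the indices, one thing I could do is a CPU-side range check just before the draw call. Again only a sketch, assuming the index data is still around in a std::vector and that I'm drawing with glDrawElements; the names are illustrative, not my actual variables:

    #include <GL/glew.h>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Returns false (and reports) if any index points past the end of the
    // vertex arrays that will actually be bound for the draw.
    bool indicesInRange(const std::vector<GLuint> &indices, GLuint vertexCount)
    {
        for (std::size_t i = 0; i < indices.size(); ++i) {
            if (indices[i] >= vertexCount) {
                std::fprintf(stderr, "index %zu is %u, but only %u vertices exist\n",
                             i, indices[i], vertexCount);
                std::fflush(stderr);
                return false;
            }
        }
        return true;
    }

    // e.g. call indicesInRange(indices, vertexCount) immediately before
    // glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, 0);

That would at least tell me whether the crash correlates with bad input from my side, but it obviously can't explain why the whole guest goes down rather than the draw call failing.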
Any ideas would be welcome.
EDIT: an update: somebody has tested this same code on a non-virtualised Linux install without issue. So that possibly puts more suspicion on Fusion 10's implementation of hardware-accelerated OpenGL.