What is going on here is that modeling a mesh and rendering it are very different tasks. There are editing algorithms that work best on quad-based topology (selecting edge loops, for example), which is why quads are preferable for the editable "source" version of a 3D model. Converting quads to triangles afterwards is also fast and cheap, as sketched below.
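As a minimal sketch of that conversion, assuming the mesh stores each quad as a 4-tuple of vertex indices, every quad can simply be split along one diagonal into two triangles (real exporters may pick the diagonal more carefully, e.g. by planarity):

```python
def triangulate_quads(quads):
    """Split each quad (a, b, c, d) into two triangles along the a-c diagonal."""
    triangles = []
    for a, b, c, d in quads:
        triangles.append((a, b, c))
        triangles.append((a, c, d))
    return triangles

# One quad becomes two triangles sharing the a-c diagonal.
print(triangulate_quads([(0, 1, 2, 3)]))  # [(0, 1, 2), (0, 2, 3)]
```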
GPUs rasterize triangles because they are a primitive type with very convenient properties: rasterization requires only a few basic math operations per pixel, and a triangle is always planar and convex, so it can never fold over itself when projected, which a (non-planar) quad can.
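To illustrate how little per-pixel work is involved, here is a toy half-space rasterizer (my own sketch, not how any particular GPU is implemented): coverage of each pixel center reduces to three signed-area tests, i.e. a handful of multiplies, adds and comparisons.

```python
def edge(ax, ay, bx, by, px, py):
    # Signed area of (a, b, p); the sign says which side of the a->b edge p lies on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    """Yield pixel coordinates whose centers are covered by the triangle.

    Illustrative only: a real GPU adds fill rules, clipping and sub-pixel
    precision, but the per-pixel work is still just multiply/add/compare.
    """
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            # Inside if all three edge functions agree in sign (handles both windings).
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                yield x, y

pixels = list(rasterize_triangle((1, 1), (7, 2), (3, 6), 8, 8))
```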
It's generally tempting to assume that the mesh data edited in a DCC tool (Maya etc.) is the same as what is rendered. This couldn't be further from the truth, because the two need completely different representations. For example, vertices on UV seams need to be duplicated for rendering, but that would make editing the mesh extremely unwieldy. So the data structures are totally different.
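A small sketch of that difference (hypothetical layouts chosen for illustration): an editing-style mesh shares positions and stores UVs per face corner, while the render-side buffers duplicate any vertex whose corners disagree on UV, such as along a seam.

```python
# Editing-style representation: positions are shared, UVs live on face corners,
# so moving one vertex moves every face that touches it.
positions = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
faces = [
    {"verts": (0, 1, 2), "uvs": [(0.0, 0.0), (0.5, 0.0), (0.5, 1.0)]},
    {"verts": (0, 2, 3), "uvs": [(0.6, 0.0), (1.0, 1.0), (0.6, 1.0)]},  # UV seam on edge 0-2
]

def build_render_buffers(positions, faces):
    """Flatten into GPU-style vertex/index buffers.

    Each unique (position index, uv) pair becomes its own render vertex, so
    corners that share a position but disagree on UV get duplicated.
    """
    vertex_buffer, index_buffer, cache = [], [], {}
    for face in faces:
        for vi, uv in zip(face["verts"], face["uvs"]):
            key = (vi, uv)
            if key not in cache:
                cache[key] = len(vertex_buffer)
                vertex_buffer.append(positions[vi] + uv)  # x, y, z, u, v
            index_buffer.append(cache[key])
    return vertex_buffer, index_buffer

vb, ib = build_render_buffers(positions, faces)
print(len(positions), "edit vertices ->", len(vb), "render vertices")  # 4 -> 6
```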