Learn computer graphics from scratch and for free (scratchapixel.com)
231 points by theusus | 26 comments




The website has come a long way; a good reminder for Santa to drop a donation.

Computer graphics needs more open education for sure. Traditional techniques are sealed in old books you have to go out of your way to find; Sergei Savchenko's "3D Graphics Programming: Games and Beyond" is a good one. New techniques are often behind proprietary gates, with shallow papers and slides that only give a hint of how things may work. Graphics APIs, especially modern ones, make things more confusing than they need to be, too. I think writing software rasterizers and ray tracers is a good starting point; forget GPUs exist.

Also, slight tangent, but there doesn't seem to be any contact method here other than Discord, which I find to be an immediate turn-off. Last time I checked, it required a phone number.

The donations page could use a link directly from the homepage too.


I can still remember a fellow student wanting to know how to write a 3D computer game, the professor being stumped, and my chiming in w/

>Get Foley & Van Dam from the library

noting it should be available to check out, since I'd just checked it back in.

Several new editions since:

https://www.goodreads.com/book/show/5257044-computer-graphic...


Yeah, that's "the mouse book" in my mind. The tiger book is also a very good compilation of topics, though it leaves things as an "exercise for the reader" more often than I would like.

https://www.goodreads.com/book/show/1933732.Fundamentals_of_...


This is gold, people.

My username on here is named after my (now older) game engine, Reactor 3D.

I taught myself this stuff back when Quake 3 took over my high school. Doom got me into computers but Quake 3 got me into 3D. I didn’t quite understand the math in the books I bought but copied the code anyway.

Fast forward into my career and it's been a pleasant blend of web and graphics, now that WebGL/WebGPU is widely available. At my day job I taught PhDs how to vertex pack and align, and how to send structs to the GPU. I regret not continuing my studies and getting a PhD, but I ended up writing Reactor 3D part time for XNA on Xbox 360 and then rewriting it half a decade later to be pure OpenGL. I still struggle with the advanced concepts, but luckily there are others out there.
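
For the curious, "vertex packing and alignment" mostly boils down to something like this toy sketch (illustrative layout, not from any real engine): an interleaved CPU-side struct whose byte layout matches the stride and attribute offsets you declare to the graphics API.

  #include <cstddef>
  #include <cstdint>
  #include <cstdio>

  // Hypothetical interleaved vertex layout: position, normal, texcoord.
  // Tightly packed fields so the CPU-side struct matches the byte layout
  // the GPU expects when the vertex buffer is uploaded.
  struct Vertex {
      float position[3];  // 12 bytes, offset 0
      float normal[3];    // 12 bytes, offset 12
      float uv[2];        //  8 bytes, offset 24
  };

  // Guard against the compiler inserting padding that would silently
  // break the attribute offsets declared to the graphics API.
  static_assert(sizeof(Vertex) == 32, "unexpected padding in Vertex");

  int main() {
      // These are the values you would hand to the API as the vertex
      // stride and per-attribute offsets.
      std::printf("stride   = %zu\n", sizeof(Vertex));
      std::printf("position = %zu\n", offsetof(Vertex, position));
      std::printf("normal   = %zu\n", offsetof(Vertex, normal));
      std::printf("uv       = %zu\n", offsetof(Vertex, uv));
  }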

Fun fact: I worked with the guy who wrote XNA Silverlight, which would eventually be used as the basis for MonoGame, so I'm like MonoGame's great-grand-uncle half removed or something.

However, now that we have different ways of doing things, it demands a different kind of engine, so the Vulkan/DX12/Metal way is the new jam.


I maintain (not much anymore) a list of free resources for graphics programming that some of you might find helpful. https://gist.github.com/notnotrobby/ceef71527b4f15869133ba7b...

Graphics have been a blind spot for me for pretty much my entire career. I more or less failed upward into where I am now (which ended up being a lot of data and distributed stuff). I do enjoy doing what I do and I think I'm reasonably good at it so it's hardly a "bad" thing, but I (like I think a lot of people here) got into programming because I wanted to make games.

Outside of playing with OpenGL as a teenager to make a planet orbit around a sun, a bad Space Invaders clone in Flash where you shoot a bird pooping on you, a really crappy Breakout clone in Racket, and the occasional experiment with Vulkan and Metal, I've never really fulfilled the dream of being the next John Carmack or Tim Sweeney.

Every time I try and learn Vulkan I end up getting confused and annoyed about how much code I need to write and give up. I suspect it's because I don't really understand the fundamentals well enough, and as a result, jumping into Vulkan feels like drinking from a firehose. I certainly hope this doesn't happen, but if I manage to become unemployed again, maybe that would be a good excuse to finally buckle down and learn this.


Try WebGL or, better, WebGPU. It's so much easier, and all the concepts you learn are applicable to other APIs.

https://webgpufundamentals.org

or

https://webgl2fundamentals.org

I'd choose WebGPU over WebGL2, as it more closely resembles current modern graphics APIs like Metal, DirectX 12, and Vulkan.


I concur; just last month I started with `wgpu` (the Rust bindings for WebGPU) after exclusively using OpenGL (since 2000, I think? via Delphi 2). Feels a bit verbose at first (with all the pipelines/bindings setup), but once you have your first working example, it's smooth sailing from there. I kind of liked (discontinued) `glium`, but this is better.

Yeah you're not the first one to mention that to me. I'll probably try WebGPU or wgpu next time I decide to learn graphics. I'd probably have more fun with it than Vulkan.

I feel the same. I was trying to make some "art" with shaders.

I was inspired by ZBrush and Maya, but I don't think I can learn what is necessary to build even a small clone of these gigantic pieces of software unless I work with this on a day-to-day basis.

The performance of ZBrush is so insane... it is mesmerizing. I don't think I can go deep into this while making my way through university.


> Every time I try and learn Vulkan I end up getting confused and annoyed about how much code I need to write and give up.

Vulkan isn't meant for beginners; it's a lot more verbose even if you know the fundamentals. Modern OpenGL would be good enough. If you have to use Vulkan, maybe use one of the libraries built on top of it (I use SDL3, for example). You still have the freedom to do whatever you want with shaders while leaving most of the resource management to those libraries.


I have a hot take. Modern computer graphics is very complicated, and it's best to build up fundamentals rather than diving off the deep end into Vulkan, which is really geared toward engine professionals who want to shave every last microsecond off their frame times. Vulkan and D3D12 are great; they provide very fine-grained host-device synchronisation mechanisms that can be used to their maximum by seasoned engine programmers. At the same time, a newbie can easily get bogged down by the sheer verbosity, and don't even get me started on how annoying the initial setup boilerplate is, which can be extremely daunting for someone just starting out.

GPUs expose a completely different programming and memory model, and the issue, I would say, is conflating computer graphics with GPU programming. The two are obviously related, don't get me wrong, but they can and do diverge quite significantly at times. This is even more true recently with the push towards GPGPU, where GPUs now combine several different coprocessors beyond just the shader cores and can be programmed with something like a dozen different APIs.

I would instead suggest:

  1) Implement a CPU rasteriser, with just two stages: a primitive assembler, and a rasteriser.
  2) Implement a CPU ray tracer. 
Web links for tutorials respectively:

  https://haqr.eu/tinyrenderer/
  https://raytracing.github.io/books/RayTracingInOneWeekend.html
These can be extended in many, many ways that will keep you sufficiently occupied trying to maximise performance and features. In fact, even achieving basic correctness requires quite a degree of complexity: the primitive assembler will of course need frustum and back-face culling (and these will mean re-triangulating some primitives). The rasteriser will need z-buffering. The ray tracer will need lighting, shadow, and camera intersection algorithms for different primitives, accounting for floating-point divergence; spheres, planes, and triangles can all be individually optimised.
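
To make the rasteriser half concrete, here is a toy sketch (my own illustrative code, not from either tutorial): fill one triangle using edge functions/barycentric weights, with a z-buffer test and a trivial back-face cull.

  #include <algorithm>
  #include <array>
  #include <cstdint>
  #include <cstdio>
  #include <vector>

  // Screen-space vertex: x, y in pixels, z for the depth test.
  struct Vec3 { float x, y, z; };

  // Twice the signed area of triangle (a, b, c); the sign gives the winding,
  // and it doubles as the edge function used for the inside test.
  static float edge(const Vec3& a, const Vec3& b, const Vec3& c) {
      return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
  }

  // Rasterise one triangle into a colour and depth buffer of size w*h.
  void rasterise(const std::array<Vec3, 3>& tri, int w, int h,
                 std::vector<uint8_t>& colour, std::vector<float>& depth) {
      const Vec3 &a = tri[0], &b = tri[1], &c = tri[2];
      float area = edge(a, b, c);
      if (area <= 0.0f) return;  // back-face (or degenerate) culling

      // Bounding box of the triangle, clamped to the framebuffer.
      int x0 = std::max(0, (int)std::min({a.x, b.x, c.x}));
      int y0 = std::max(0, (int)std::min({a.y, b.y, c.y}));
      int x1 = std::min(w - 1, (int)std::max({a.x, b.x, c.x}));
      int y1 = std::min(h - 1, (int)std::max({a.y, b.y, c.y}));

      for (int y = y0; y <= y1; ++y) {
          for (int x = x0; x <= x1; ++x) {
              Vec3 p{x + 0.5f, y + 0.5f, 0.0f};
              // Barycentric weights via the three edge functions.
              float w0 = edge(b, c, p), w1 = edge(c, a, p), w2 = edge(a, b, p);
              if (w0 < 0 || w1 < 0 || w2 < 0) continue;  // outside the triangle
              w0 /= area; w1 /= area; w2 /= area;
              float z = w0 * a.z + w1 * b.z + w2 * c.z;
              if (z >= depth[y * w + x]) continue;       // z-buffer test
              depth[y * w + x] = z;
              colour[y * w + x] = 255;                   // flat white fill
          }
      }
  }

  int main() {
      const int w = 64, h = 64;
      std::vector<uint8_t> colour(w * h, 0);
      std::vector<float> depth(w * h, 1e30f);
      std::array<Vec3, 3> tri = {{{5, 5, 0.5f}, {60, 10, 0.5f}, {20, 55, 0.5f}}};
      rasterise(tri, w, h, colour, depth);
      // Dump as ASCII so the sketch stays dependency-free.
      for (int y = 0; y < h; ++y) {
          for (int x = 0; x < w; ++x) std::putchar(colour[y * w + x] ? '#' : '.');
          std::putchar('\n');
      }
  }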

Try adding various anti-aliasing algorithms to the rasteriser. Add shading: begin with flat, then extend to per-vertex, then per-fragment. Try adding a tessellator where the level of detail is controlled by camera distance. Add early discard instead of the usual z-buffering.
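
For the shading step, a sketch of what moving from flat to per-fragment means (again illustrative code; the barycentric weights are assumed to come from the rasteriser above):

  #include <cmath>
  #include <cstdio>

  struct Vec3 { float x, y, z; };

  static Vec3 normalise(Vec3 v) {
      float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
      return {v.x / len, v.y / len, v.z / len};
  }

  static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

  // Per-fragment Lambert shading: interpolate the vertex normals with the
  // barycentric weights (w0, w1, w2) produced by the rasteriser, renormalise,
  // then take N.L clamped to zero. Flat shading would use one face normal for
  // the whole triangle; per-vertex shading would interpolate the lit colour
  // instead of the normal.
  float shade_fragment(Vec3 n0, Vec3 n1, Vec3 n2,
                       float w0, float w1, float w2, Vec3 light_dir) {
      Vec3 n = {w0 * n0.x + w1 * n1.x + w2 * n2.x,
                w0 * n0.y + w1 * n1.y + w2 * n2.y,
                w0 * n0.z + w1 * n1.z + w2 * n2.z};
      n = normalise(n);
      float ndotl = dot(n, normalise(light_dir));
      return ndotl > 0.0f ? ndotl : 0.0f;
  }

  int main() {
      // One sample near the middle of a triangle whose vertex normals tilt apart.
      Vec3 n0{0, 0, 1}, n1{0.3f, 0, 0.95f}, n2{-0.3f, 0, 0.95f};
      float lit = shade_fragment(n0, n1, n2, 0.33f, 0.33f, 0.34f, {0.5f, 0.5f, 1.0f});
      std::printf("diffuse term = %.3f\n", lit);
  }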

To the basic Whitted CPU ray tracer, add BRDFs and microfacet theory; add subsurface scattering, caustics, and photon mapping/light transport; and work towards a general global illumination implementation. Add denoising algorithms. And of course, implement and use acceleration data structures for faster intersection lookups; there are many.
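
All of that bottoms out in intersection routines; here is the canonical ray-sphere one as a toy sketch (my own illustrative code; the epsilon is one crude answer to the floating-point issues mentioned above):

  #include <cmath>
  #include <cstdio>
  #include <optional>

  struct Vec3 {
      float x, y, z;
      Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
      Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
      Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
  };
  static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

  // Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for the nearest
  // positive t. This is the primitive every Whitted-style tracer starts with;
  // planes and triangles get their own analogous routines.
  std::optional<float> hit_sphere(Vec3 origin, Vec3 dir, Vec3 centre, float radius) {
      Vec3 oc = origin - centre;
      float a = dot(dir, dir);
      float b = 2.0f * dot(oc, dir);
      float c = dot(oc, oc) - radius * radius;
      float disc = b * b - 4.0f * a * c;
      if (disc < 0.0f) return std::nullopt;           // ray misses the sphere
      float t = (-b - std::sqrt(disc)) / (2.0f * a);  // nearer root
      if (t < 1e-4f) return std::nullopt;             // behind or too close (self-hit)
      return t;
  }

  int main() {
      // Primary ray straight down -z at a unit sphere 5 units away.
      if (auto t = hit_sphere({0, 0, 0}, {0, 0, -1}, {0, 0, -5}, 1.0f)) {
          Vec3 p = Vec3{0, 0, 0} + Vec3{0, 0, -1} * *t;
          std::printf("hit at t=%.2f, point (%.2f, %.2f, %.2f)\n", *t, p.x, p.y, p.z);
      }
  }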

Working on all of these will frankly give you a more detailed and intimate understanding of how GPUs work and why they have been developed a certain way than programming with something like Vulkan and spending time filling in struct after struct.

After this, feel free to explore either of the two more 'basic' graphics APIs: OpenGL 4.6 or D3D11. shadertoy.com and shaderacademy.com are great resources for understanding fragment shaders. There are again several widespread shader languages, though most of the industry uses HLSL. GLSL can be simpler, but HLSL is definitely more flexible.

At this point, explore more complicated scenarios: deferred rendering, pre- and post-processing for things like ambient occlusion, mirrors, temporal anti-aliasing, render-to-texture for lighting and shadows, etc. This is video-game focused; you could go another direction by exploring 2D UIs, text rendering, compositing, and more.
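
To make 'deferred rendering' a bit less abstract, here is a CPU-side caricature of the idea (illustrative names and layout, not how any particular engine packs its G-buffer): a geometry pass writes per-pixel attributes, and lighting happens in one separate loop over pixels and lights, never re-rasterising the geometry.

  #include <cmath>
  #include <cstdio>
  #include <vector>

  struct Vec3 { float x, y, z; };
  static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

  // One G-buffer texel as a deferred renderer might lay it out: the geometry
  // pass writes these per pixel; no lighting happens until afterwards.
  struct GBufferTexel {
      Vec3 position;  // world-space position (often reconstructed from depth)
      Vec3 normal;
      Vec3 albedo;
  };

  struct PointLight { Vec3 position; Vec3 colour; };

  // Lighting pass: one loop over pixels, one over lights. That decoupling of
  // geometry cost from lighting cost is the whole point of deferred shading.
  void light_pass(const std::vector<GBufferTexel>& gbuffer,
                  const std::vector<PointLight>& lights,
                  std::vector<Vec3>& out) {
      for (size_t i = 0; i < gbuffer.size(); ++i) {
          const GBufferTexel& g = gbuffer[i];
          Vec3 sum{0, 0, 0};
          for (const PointLight& l : lights) {
              Vec3 d{l.position.x - g.position.x,
                     l.position.y - g.position.y,
                     l.position.z - g.position.z};
              float dist = std::sqrt(dot(d, d));
              Vec3 ldir{d.x / dist, d.y / dist, d.z / dist};
              float ndotl = std::fmax(0.0f, dot(g.normal, ldir));
              float atten = 1.0f / (1.0f + dist * dist);  // simple falloff
              sum.x += g.albedo.x * l.colour.x * ndotl * atten;
              sum.y += g.albedo.y * l.colour.y * ndotl * atten;
              sum.z += g.albedo.z * l.colour.z * ndotl * atten;
          }
          out[i] = sum;
      }
  }

  int main() {
      std::vector<GBufferTexel> gbuffer{{{0, 0, 0}, {0, 1, 0}, {0.8f, 0.2f, 0.2f}}};
      std::vector<PointLight> lights{{{0, 2, 0}, {1, 1, 1}}};
      std::vector<Vec3> out(gbuffer.size());
      light_pass(gbuffer, lights, out);
      std::printf("lit pixel: %.3f %.3f %.3f\n", out[0].x, out[0].y, out[0].z);
  }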

As for why I recommend starting with CPUs only to end up back at GPUs again, one may ask: 'hey, who uses CPUs for graphics any more?' WARP[1] and LLVMpipe[2] are both production-quality software rasterisers, frequently loaded during remote desktop sessions. In fact, 'rasteriser' is an understatement: they expose full-fledged software implementations of D3D10/11 and OpenGL/Vulkan devices, respectively. And naturally, most film renderers still run on the CPU, due to their improved floating-point precision; films can't really get away with the ephemeral smudging of video games. Also, CPU cores are quite cheap nowadays, so it's not unusual to see a render farm of a million-plus cores chewing away at a complex Pixar or DreamWorks frame.

[1]: https://learn.microsoft.com/en-gb/windows/win32/direct3darti...

[2]: https://docs.mesa3d.org/drivers/llvmpipe.html


I would simplify further:

1) Implement 2D shapes and sprites with blits

With modern compute shaders, this has 95% of "How to use a GPU" while omitting 99% of the "Complicated 3D Graphics" that confuses everybody.
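
For anyone unsure what that looks like, a CPU sketch of a sprite blit (my own toy code): copy an RGBA sprite into a destination image with clipping and a cheap alpha test. The compute-shader version is the same loop body run by one GPU thread per destination pixel.

  #include <cstdint>
  #include <cstdio>
  #include <vector>

  // A tiny RGBA8 image; both the sprite and the destination use this.
  struct Image {
      int w, h;
      std::vector<uint32_t> pixels;  // 0xAABBGGRR, one per texel
  };

  // Blit `src` onto `dst` at (dx, dy), skipping fully transparent texels.
  void blit(const Image& src, Image& dst, int dx, int dy) {
      for (int y = 0; y < src.h; ++y) {
          for (int x = 0; x < src.w; ++x) {
              int tx = dx + x, ty = dy + y;
              if (tx < 0 || ty < 0 || tx >= dst.w || ty >= dst.h) continue;  // clip
              uint32_t texel = src.pixels[y * src.w + x];
              if ((texel >> 24) == 0) continue;  // alpha test: skip transparent
              dst.pixels[ty * dst.w + tx] = texel;
          }
      }
  }

  int main() {
      Image dst{8, 8, std::vector<uint32_t>(64, 0xFF000000)};  // opaque black
      Image spr{2, 2, {0xFFFFFFFF, 0x00000000, 0x00000000, 0xFFFFFFFF}};  // checker
      blit(spr, dst, 3, 3);
      for (int y = 0; y < dst.h; ++y) {
          for (int x = 0; x < dst.w; ++x)
              std::putchar((dst.pixels[y * dst.w + x] & 0xFFFFFF) ? '#' : '.');
          std::putchar('\n');
      }
  }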


Vulkan isn't a graphics API, it's a low level GPU API. Graphics just happens to be one of the functions that GPUs can handle. That can help understand why Vulkan is the way it is.

I really enjoy the website content and appreciate the hard work to create it. Also, thank you to the author for taking action on the HN feedback last year about the AI thumbnails that used to be all over this site. [0]

[0] https://news.ycombinator.com/item?id=40622209


There's still an obnoxious slop image front-and-center, full of nonsense typos. Not a good look for any educational resource.

One of my goals this year is to write a basic software 3D renderer from first principles. No game engine, no GPU. I'm looking forward to it.

Good show, this is how I recommend doing it and have been teaching it for years.

It's quite unfortunate that basically everyone thinks 3D graphics necessarily implies rasterisation and using someone else's API, and I feel extremely lucky to have taught myself in a time when you could trivially display images by direct memory access (mode 13h), and to have focused on ray tracing instead of rasterisation.


OP's link is a good one, but if you want a different perspective (heh), there's https://gabrielgambetta.com/computer-graphics-from-scratch/i..., also from scratch, also for free. The name clash is unfortunate; I don't really know who started using it first :(

One Formula That Demystifies 3D Graphics:

https://www.youtube.com/watch?v=qjWkNZ0SXfo



build-your-own-x*, a popular compilation of "well-written, step-by-step guides for re-creating our favorite technologies from scratch", contains some graphics-related guides.

* https://github.com/codecrafters-io/build-your-own-x


You can now paste a link to a website into an LLM and turn it into an interactive resource. I did this with a 1000-page PDF today to help me learn more about game engines. It's the best way to do it if you don't want it to become another PDF or bookmark that gets forgotten.

Which LLMs have a context window big enough to fit a 1000-page PDF?


Just in case Nvidia stops having a monopoly on graphics APIs, and Google on the web, and AMD as the alternative that sucks and isn't maintained.


