Orbit Tessellation developer diary for Kerbal Space Program 2 (kerbalspaceprogram.com)
192 points by bmease on April 25, 2021 | 82 comments


Space rendering is full of interesting problems like this. Another one is that single precision floating point doesn't have enough precision to represent both planet scale and human scale in the same coordinate system (let alone solar system scale or galaxy scale), yet GPUs don't support double precision well. So you have to make sure that you do calculations needing high precision on the CPU in double precision and only send the GPU coordinates it can handle, or your 3D models will get crunched by precision errors.
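
A toy numpy sketch of that failure mode (the real loss happens in shader math, but float32 rounds the same way everywhere):

    import numpy as np

    camera = np.array([1.4959787e11, 0.0, 0.0])   # ~1 AU from the sun, meters
    vertex = camera + np.array([1.0, 0.0, 0.0])   # 1 m of human-scale detail

    # naive: cast absolute coordinates to float32 and subtract "on the GPU"
    bad = vertex.astype(np.float32) - camera.astype(np.float32)
    # robust: subtract in float64 on the CPU, send only the small offset down
    good = (vertex - camera).astype(np.float32)
    print(bad[0], good[0])   # 0.0 vs 1.0 -- float32's ulp at 1 AU is ~16 km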

Another one is that a planet sphere renderer will often tessellate the sphere into quads in lat-lon space. Of course the quads are split into two triangles for rendering. However, at the poles, one of the triangles has zero area because two of its vertices are the same point, the pole. Then when you texture map that "quad" with a square texture, half of the texture is not shown, and you get visible seams (Google Earth suffers from this artifact, or at least it did in the past). What's less obvious is that this problem is present to a lesser extent in every quad on the sphere, because the triangle with the horizontal edge nearer the pole is smaller than the other, so half of the texture is stretched and half is shrunk. The fix is to use homogeneous texture coordinates.
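
A minimal sketch of that fix, assuming the per-vertex weight q is taken proportional to the quad's edge length (i.e. cos of latitude); the GPU interpolates (u*q, v*q, q) linearly and the fragment stage divides by q:

    import numpy as np

    def pole_quad_texcoords(lat_bottom, lat_top):
        # A lat-lon quad is a trapezoid whose edge lengths scale with
        # cos(latitude). Weighting each vertex's (u, v) by q = cos(lat)
        # makes linear interpolation of (u*q, v*q, q), followed by a
        # divide by q, reproduce the projective mapping without seams.
        q0, q1 = np.cos(lat_bottom), np.cos(lat_top)
        return [(0.0 * q0, 0.0 * q0, q0),   # bottom-left
                (1.0 * q0, 0.0 * q0, q0),   # bottom-right
                (0.0 * q1, 1.0 * q1, q1),   # top-left
                (1.0 * q1, 1.0 * q1, q1)]   # top-right: uv = (uq, vq) / q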


KSP's approach to the precision problem is surprisingly simple: whenever you get more than two kilometers away from the origin, move the entire universe two kilometers so that the location of your craft, and all the physics-relevant computations on it, have small numbers as coordinates.

(This fix was known as Krakensbane: it solved a bug known as the Deep-Space Kraken, which was essentially that, before the fix, floating-point physics inaccuracies would tear your ship apart once you got much further out than the moon or so.)
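
Something like this, as a sketch (not KSP's actual code; the 2 km threshold is from the comment above, and world_objects is assumed to be a list of things carrying a numpy position):

    import numpy as np

    THRESHOLD = 2000.0   # meters from the origin before we recenter

    def krakensbane_shift(craft_pos, world_objects):
        # Recenter the universe on the craft so the physics always runs
        # on coordinates small enough for float32 to handle accurately.
        if np.linalg.norm(craft_pos) > THRESHOLD:
            offset = craft_pos.copy()
            craft_pos -= offset
            for obj in world_objects:
                obj.position -= offset
        return craft_pos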


Pretty sure that the floating origin moves with the craft every frame now. The entire universe also orbits around the craft when it is below the inverse-rotation altitude threshold, which means that PhysX is doing physics in the co-rotating reference frame. That means that the "First Point of Aries" -- normally (1,0,0) -- rotates around the z-axis as the craft moves, and you have to query Planetarium.right to determine its current orientation. That means that tick-to-tick coordinates change, which makes trajectory optimization hard because values in the future won't match at all. You have to remove that rotation to get something like actual inertial coordinates (after also removing the origin offset to the vessel as well).
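
For a rotation about the z-axis at rate w, undoing the co-rotating frame looks roughly like this (a sketch of the cleanup, not KSP's API):

    import numpy as np

    def corotating_to_inertial(r_rot, v_rot, w, t):
        # Rotate back by the accumulated planet rotation, then add the
        # omega-cross-r term that the rotating frame hides from you.
        c, s = np.cos(w * t), np.sin(w * t)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        r_in = R @ r_rot
        v_in = R @ v_rot + np.cross([0.0, 0.0, w], r_in)
        return r_in, v_in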

They've also recently fixed issues with single-precision calculations in KSP1 and used a double-precision QuaternionD.LookRotation in the maneuver nodes to keep interplanetary trajectories from hopping around a lot.

[ oh, it also uses left-handed coordinates, which is terrible: (0,1,0) is the north pole, the left-handed cross product gets used, and dual vectors like angular momentum point south for east-going orbits -- except the Orbit class uses normal right-handed vectors, and when you forget to .xzy-swizzle a vector for one reason or another you can wind up debugging an issue all day long ]


Ah, did not know that - the last time I was working on KSP mods that interacted with that part of the code was in 0.19. It really caught me off guard when everything started to fly off into space as I tried to move ships past the end of the space center or so.


That's interesting. It doesn't sound that simple though, I imagine there are some gotchas with that. They are probably constrained by what Unity allows.

In my custom engine I did the world-to-camera transform for each object on the CPU in double precision, essentially making the camera the origin for all subsequent single-precision computations on the GPU. That works for small objects and even large objects that are split into small parts, like a planet split into LOD terrain tiles. But it didn't work for orbit lines because they are a single object at planet scale that you can zoom in on to see at human scale (I didn't have an adaptive tessellation system like the one in the article).

It also wouldn't have worked for galaxy scale, where even double precision wouldn't be enough. I don't know exactly what Celestia and similar "entire universe" apps do. Emulated quad precision floating point?

Edit: I just realized that you are talking about precision issues with the physics engine, while I'm talking about precision issues in the rendering engine. Related but slightly different. Physics engines aren't constrained by GPUs and can use double precision throughout. But they often have stability issues even in normal circumstances so I can certainly imagine that solar system scales would be a problem even in double precision.


KSP's solution to the graphics problem is what they call "scaled space". Basically, nearby objects like your spaceship, the planet you're on, etc are rendered normally, but there is another copy of the solar system that's scaled down to 1/10 scale on another scene, and composited in behind everything. This is where things like orbit lines are drawn. It works well enough for KSP's solar system, which is rather small, but there are rendering issues with modpacks that add other solar systems far away from the sun; I suspect that since KSP2 is adding interstellar travel they are going to need to come up with another solution.


An interesting piece of industry-insider information, at least as far as I recall from the Sony pub regulars I knew back in the day: the colloquial term for the polar artefact you describe is the cat’s bumhole.


One way around this is cube-mapping. You construct a cube, then normalize all the vertices to "over-inflate" it until it's a sphere. Then you have six textures mapped to the faces of the cube, and no cat's bumhole, and no international-date-line zipper. If you subdivide the faces of the cube into rectangles via equal angles instead of naively into a grid, then cut each of the rectangles into two triangles by the shortest diagonal, you get a very nice tessellation: the triangles are fat (nearly equilateral), all of approximately equal size, with no thin slivers.

The other thing you can do is double the vertices along the cube face edges, so that no two cube faces share any vertices. In that way you can avoid some "hairy ball theorem" related problems when it comes time to do normal mapping, which involves constructing tangent and bitangent vectors for each vertex. Each face of the cube can have a field of tangents and bitangents with no discontinuities, and since no texture or normal map crosses any boundary between faces, you avoid the problems that come up with such discontinuities. (The hairy ball theorem says it's impossible to construct a field of tangents and bitangents covering a sphere that does not contain some discontinuity, so one solution is to stuff the discontinuities between cube map faces where they cannot bother any texture or normal map.)

Tessellated sphere looks like this: https://imgur.com/l3GmWq3
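
A minimal sketch of one face of the equal-angle construction described above (grid lines at tan of equally spaced angles, then normalize onto the sphere):

    import numpy as np

    def cubesphere_face(n):
        # Equal-angle subdivision of the +Z cube face: grid lines sit at
        # tan of equally spaced angles rather than uniformly in [-1, 1],
        # which keeps the triangles near-equal in size after inflation.
        t = np.tan(np.linspace(-np.pi / 4, np.pi / 4, n + 1))
        u, v = np.meshgrid(t, t)
        pts = np.stack([u, v, np.ones_like(u)], axis=-1)
        # "over-inflate" the cube: push every vertex onto the unit sphere
        return pts / np.linalg.norm(pts, axis=-1, keepdims=True)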


KSP does indeed use tessellated spheres. This solves the texture mapping issues. However, there are still some weird camera and physics bugs due to certain systems using a longitude/latitude system for planets.


Creating astronomy visualizations often involves all sorts of fun tricks. I work in a planetarium as a 3D animator and each day has interesting, unique challenges. Indeed, we have to remain mindful of not pushing Maya too hard when it comes to such ranges of scale.

We use fluids as a way to create 3D nebulae that can be flown through. https://thefulldomeblog.com/2013/08/20/the-nebula-challenge/

Or if you constrain the fluid into a sphere, then you have a dynamic volumetric sun. https://thefulldomeblog.com/2013/07/30/customizing-a-close-u...

When needing to fly through a star field, relying on particle sprites is an easy way to quickly render thousands of stars. https://thefulldomeblog.com/2013/07/03/creating-a-star-field...

Background stars are achieved by point-constraining a poly sphere to the camera. Having a poly sphere allows for easy manipulation to create realistic diurnal motion. https://thefulldomeblog.com/2013/11/13/background-stars-v2/

Flying through a galaxy field can be achieved with loads of galaxy images mapped to poly planes. For galaxies that are seen edge on, we sometimes add more detail by emitting fluid from the image colors. https://thefulldomeblog.com/2013/07/16/flying-through-a-gala...

Simulating the bands of Jupiter is tricky but I've done some experiments with 2D fluids. https://thefulldomeblog.com/2014/01/30/jupiter-bands-simulat...

And of course since the visuals are rendered for a planetarium dome, we gotta render using a fisheye camera. These days all render engines support fisheye, but 10 years ago it was a different story. https://thefulldomeblog.com/2019/09/07/exploring-render-engi... https://thefulldomeblog.com/2013/06/28/fisheye-lens-shader-o... https://thefulldomeblog.com/2013/07/23/stitching-hemicube-re...


I'm sure you've heard this before, but have you checked out Space Engine[0]? It has some pretty advanced features, like path animations and cubemap rendering. I'm not sure how well it'd integrate into existing workflows, but I've used it for creating high dynamic range skyboxes for spacecraft renders.

[0]: http://spaceengine.org/


> Simulating the bands of Jupiter is tricky but I've done some experiments with 2D fluids.

Nice! I've always wanted to do some fluid dynamics on the surface of a sphere, but the math is too hard for me. I found a video on YouTube where someone did some interesting things a few years ago: https://www.youtube.com/watch?v=Lzagndcx8go&t=1s but there's very little information about it. Then there was what was done for the film "2010: The Year We Make Contact": http://2010odysseyarchive.blogspot.com/2014/12/

I've had to resort to simpler means myself, which means faking it. I use OpenSimplex noise on the surface of a sphere. From this I can find the gradient of the noise field tangent to the surface of the sphere, then rotate this vector 90 degrees about an axis passing through the center of the sphere, which is equivalent to some kind of spherical curl of the noise field. That gives me a non-divergent velocity field. Because incompressible fluid flows are also non-divergent, there's a strong but superficial resemblance: it looks like fluid flow, even though it is just an arbitrary process. Into this field, I dump a bunch of colored particles and let them flow around, painting alpha-blended, slowly fading trails behind them onto the surface of a cube to be used later as the textures of a cubemapped sphere.

For the bands, I superimpose a simple velocity field of counter-rotating bands on top of this curl-noise generated velocity field. Something like: horizontal_velocity += K * sin(5 * latitude)
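
Putting the two pieces together, roughly (gaseous-giganticus uses OpenSimplex noise; the noise() here is just a smooth stand-in):

    import numpy as np

    def noise(p):
        # stand-in for OpenSimplex: any smooth scalar field will do
        x, y, z = p
        return np.sin(3 * x + 1.3 * np.sin(2 * y)) * np.cos(2 * z + 0.7 * x)

    def velocity(p, k_band=0.5, eps=1e-4):
        p = p / np.linalg.norm(p)                 # point on the unit sphere
        g = np.array([noise(p + d) - noise(p - d)
                      for d in np.eye(3) * eps]) / (2 * eps)  # gradient
        g -= p * np.dot(g, p)                     # keep the tangential part
        v = np.cross(p, g)                        # rotate 90 deg about the normal
        east = np.cross([0.0, 0.0, 1.0], p)       # counter-rotating bands
        if np.linalg.norm(east) > 1e-9:
            v += k_band * np.sin(5 * np.arcsin(p[2])) * east / np.linalg.norm(east)
        return v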

Results look like this: https://duckduckgo.com/?q=gaseous-giganticus&t=h_&iax=images...

The idea for using the curl of a noise field to mimic fluid dynamics is from a paper by Robert Bridson, et al.: https://www.cs.ubc.ca/~rbridson/docs/bridson-siggraph2007-cu...

This program is open source, it's here: https://github.com/smcameron/gaseous-giganticus


> yet GPUs don't support double precision well.

Are there GPUs without FP64 functionality at all? Or are you just referring to most consumer GPUs being built for FP32 performance over FP64?


It's not that GPUs don't support fp64; it's that on consumer gaming GPUs, fp64 arithmetic normally runs at ~1/32 the rate of fp32 arithmetic.

e.g. GTX 1080

    FP16 (half) performance
    138.6 GFLOPS (1:64)
    FP32 (float) performance
    8.873 TFLOPS
    FP64 (double) performance
    277.3 GFLOPS (1:32)

e.g. RTX 3090

    FP16 (half) performance
    35.58 TFLOPS (1:1)
    FP32 (float) performance
    35.58 TFLOPS
    FP64 (double) performance
    556.0 GFLOPS (1:64)

Generally only 'Tesla' class cards targeted at supercomputers have a 1:2 ratio (e.g. V100, A100, Titan V). Note: I believe the Titan V is the only Titan-series GPU with good double performance, as the Volta architecture never made it into GeForce GPUs.

https://www.techpowerup.com/gpu-specs/geforce-gtx-1080.c2839

https://www.techpowerup.com/gpu-specs/geforce-rtx-3090.c3622

https://www.techpowerup.com/gpu-specs/tesla-v100-sxm3-32-gb....

https://www.techpowerup.com/gpu-specs/a100-sxm4-80-gb.c3746

https://www.techpowerup.com/gpu-specs/titan-v.c3051


For AMD GPUs the FP64:FP32 performance ratio is twice as good as Nvidia's; it's 1:16.

https://en.wikipedia.org/wiki/Radeon_RX_5000_series#Desktop

https://en.wikipedia.org/wiki/Radeon_RX_6000_series#Desktop


There are many GPUs with no double precision floating point support in hardware. Modern ones can probably emulate it. Older ones don't have any explicit support. Most real time 3D renderers do not use double precision at all.


> There are many GPUs with no double precision floating point support in hardware.

Which ones? I'm genuinely curious about this.


I think all of the ones listed as DirectX 10.1 or lower on this page: https://en.wikipedia.org/wiki/Feature_levels_in_Direct3D

Those are all quite old now of course, but even modern GPUs may not have 64-bit ALUs and rely on emulation instead. Intel Gen11 in Ice Lake, for example: https://01.org/sites/default/files/documentation/intel-gfx-p...

In mobile architectures 64-bit ALUs can be an optional feature that is omitted for lower end configurations. I know this is true of PowerVR.


Ah interesting to know about Gen11, thanks for the links.


    Some napkin/WolframAlpha math:
    if you wanted to use simple x,y,z coordinates,
    with the sun at the center,
    and be able to represent locations at 30 AU (Neptune)
    with an accuracy of 1 mm, e.g. 30 AU vs 30.000...0001 AU,
    you'd need ~16 decimal digits of precision,
    which is about the limit of a double (FP64).
    Of course there are better ways to do this,
    per the surrounding smarter comments.
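
Quick check in Python:

    import math
    AU = 1.495978707e11                     # meters
    print(math.log10(30 * AU / 1e-3))       # ~15.65 decimal digits needed
    # a float64 mantissa (53 bits) carries ~15.95 digits, so it just fits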


> Another one is that single precision floating point doesn't have enough precision to represent both planet scale and human scale in the same coordinate system (let alone solar system scale or galaxy scale), yet GPUs don't support double precision well.

Could you do double-single computations on a GPU? (By that I mean something like double-double arithmetic, only with two singles instead of two doubles.)


Sure, with a performance penalty. Some GPUs do support double precision, though again at a performance penalty. You might run into issues with the fixed-function parts of the GPU being single precision even on GPUs that support double-precision types in shaders, depending on how you do things. Also, shading languages probably don't have nice libraries for double-single arithmetic, so you'd have to roll your own, probably without niceties like operator overloading.
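
The core building block is Knuth's error-free two-sum; here's the idea in Python with numpy's float32 standing in for GPU singles (in a real shader you'd have to spell this out in GLSL/HLSL):

    import numpy as np
    f32 = np.float32

    def two_sum(a, b):
        # error-free transformation: s + err == a + b exactly (a, b are f32)
        s = f32(a + b)
        v = f32(s - a)
        err = f32(f32(a - f32(s - v)) + f32(b - v))
        return s, err

    def ds_add(x, y):
        # x, y are (hi, lo) pairs of singles approximating one value
        s, e = two_sum(x[0], y[0])
        e = f32(e + f32(x[1] + y[1]))
        return two_sum(s, e)   # renormalize back into a (hi, lo) pair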

It's not required for space rendering, as once everything is in camera coordinates you no longer have any visible precision issues (as long as you are using a float reverse Z buffer). You just have to be careful to make sure your world to camera transforms are done on the CPU, and you break large objects like planets into small LOD chunks with their own local coordinate systems, which you need to do anyway.


If I recall correctly, on a CPU the penalty of doing "double-X" is something like 1:7 compared to just doing X. On most consumer GPUs, the penalty of doing doubles instead of singles would be 1:24 or 1:32 these days, wouldn't it? So there should still be a fourfold speedup or so. Mixed binary operations with one single and one double-single should be cheaper, whenever applicable.

As for "rolling your own", this is a compiler transformation, effectively. So it may depend on your workflow whether it's painful or not.


Can’t we use an icosahedron for a better sphere?


Yes, and there are many other tessellations you can use, if you texture it with a single texture. But you can't texture a whole planet with one texture if you allow zooming from planet to human scale, the texture would be terabytes. You must tile the sphere with textures that are loaded on demand, and textures are square, so it makes sense to divide the sphere into quads (actually a quadtree for zooming).


If you want to texture a sphere, the cubesphere is the best way to draw. It is what you get when you subdivide a cube using slerp instead of lerp for interpolation.

An icosahedron doesn't work well if you have textures. The triangle topology near the poles leads to bad texturing.

Icosahedron may work better if you have a fully procedural pipeline and don't need to worry about square textures.


One of the things KSP taught me is that orbital mechanics aren't that complicated, but even so, pretty much all movies and TV shows about space were written by someone who doesn't even know the basics.


The Expanse season 1 was a nice exception to the rule, but I find that the longer the show goes on, the more it gets ignored. Missiles that fly into the sun at faster-than-light speeds...


The Expanse still gets the physics right for the important scenes. Watching people stand around for days as a torpedo slowly closes on the sun would be boring.

Space combat physics is still done very well.


That sequence was indeed not realistic. It's interesting to keep in mind that according to Expanse lore, the Epstein drive on human-rated vessels typically operates at around 11 g. A missile that is much lighter (so the rocket equation has less teeth) and isn't constrained by fragile meat-bag physics would likely be even more powerful.

But if we just assume 11 g and constant acceleration (remember, specific impulse is supposed to be 1,100,000 seconds), that gets you to 0.05c in about 38 hours. At that point, 8 light-minutes (1 AU) takes 160 minutes to travel. Not instantaneous by any means, but a lot faster than one would imagine.
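
Napkin math in Python:

    g0, c = 9.81, 2.998e8
    a = 11 * g0                       # Epstein drive at 11 g, in m/s^2
    print(0.05 * c / a / 3600)        # ~38.6 hours to reach 0.05c
    print(8 / 0.05)                   # 8 light-minutes at 0.05c: 160 minutes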


I’d argue that it’s just an effect of lazy writing though. Deorbiting anything into the sun is a very expensive way to get rid of it. They showed instant proto molecule cleanup operations in another season - just use that again.


They didn’t want to risk going after it, it’s not written differently in the books.

It was done so the audience can understand what happens easily without having to drag it out over a few episodes.

The show's main strength is that the writers know rather well when physics is important for the plot and when it isn't, and they execute on this very well.


The vanilla KSP physics aren't realistic: only the nearest/most influential body exerts gravity on the spaceship. With the Principia mod[0], more realistic and complex maneuvers[1] can be simulated.

"Orbit type diagrams"[3][4] show the fractal-like complexity of three-body and n-body problems[5][6].

[0] https://github.com/mockingbirdnest/Principia

[1] https://www.youtube.com/watch?v=l3PCCJZzVvg

[3] https://www.semanticscholar.org/paper/Crash-test-for-the-res...

[4] https://www.semanticscholar.org/paper/Crash-test-for-the-Cop...

[5] https://en.wikipedia.org/wiki/Three-body_problem

[6] https://en.wikipedia.org/wiki/N-body_problem


KSP physics may not be realistic, but they should be close enough for most movies and TV shows, yet they don't even approach that. I think that's the point of the parent.


The problem is that if you show a ship heading away from a planet by burning parallel to it, or heading to the planet in the same way, people watching will be confused.


Often, artistic license takes over actual physics; even when the writer knows the field, they will prefer to do it in a way that fits the plot.

I mean, even Interstellar, with a Nobel laureate on board, sometimes forgoes scientific accuracy for nicer pictures.

There is also a game of accuracy, viewer expectations, and attention. For example, most people will think that the best way to land from orbit is to point the ship towards the ground and fire the thrusters; obvious, right? If the ship points 90 degrees away from that, people will ask themselves why. If orbital mechanics is central to your movie, that's good, but you may have some explaining to do. If you are in the middle of an epic space battle, it is not the time for a physics lesson, so go for the obvious (and wrong) and let the viewer focus on the action.


They are still making a work of fiction and entertainment, so some shortcuts are taken. Series like Star Trek just come up with new physics and word salad to progress the plot and pull out their deus ex machina.


I used a very similar approach to render airplane contrails in a flight sim back in the late 90s. The contrail had a binary tree representing the volume of space taken up by segments of the trail. Projecting this volume into screen space and comparing against a heuristic resulted in either rendering the segment or recursing down the tree for finer detail.


KSP 2 has easily been my most anticipated game for a while now. It's a bummer that there's still a while to wait, but I'm hoping that it's worth it.


You can give Simple Rockets 2 a try in the meantime. It uses fully customizable procedural parts (wings, fuel tanks, even engines) instead of premade parts like KSP, it looks a bit nicer and it includes a visual programming environment for your spacecraft (e.g. you can build a SpaceX-like autoland routine).


I've never tried Simple Rockets; could you compare it to KSP? At a glance, it seems more serious than KSP, while KSP has a "fun" edge, but besides that they seem more similar than not.


SR2 is a little bit more serious than KSP, I'd say, but it's no Orbiter either. They use a smaller-than-real-size solar system (bigger than KSP's, though) by default (you can swap it out for a community-created realistic system in-game), so you can still brute-force your way to orbit or build an SSTO and it will kinda work.

What I love about SR2 is the aerodynamics and the procedural parts. You can create an airplane and design its wings and control surfaces by hand, you can play with the center of gravity, custom fit it with a jet engine (by tweaking the compression and bypass ratios, adding or omitting an afterburner) and just fly some aerobatics.

You can use electrically powered rotators and hinges, powered procedural wheels for cars, and other cool stuff. And if everything else fails, your rocket is saved as a plain and simple XML file, so it's easy to just dig in and tweak some more.

On the other hand, there is no campaign yet and only a bunch of tutorial-like-tasks to get you started (take off, reach speed X, achieve orbit, hit the Moon with your spaceship etc).

I wouldn't say that either game is better, SR2 is just different and provides you with some more freedom and options than vanilla KSP.

Also KSP is Windows + Mac + Linux + consoles, while SR2 is Windows + Mac + iOS + Android, and the mobile versions are surprisingly playable (especially on tablets or larger phones).


That's great, it sounds interesting, especially the airplane parts; building space-planes was always one of the best parts of KSP for me. Thanks for taking the time to write such an elaborate reply!


Same, so far they've mainly shown a lot of visual improvements, but I really hope the technical groundwork is being done as well.


My biggest problem with trajectories in KSP was that they flip discontinuously once the trajectory passes into another sphere of influence. I believe it would be much clearer and easier if trajectories changed continuously but were simply marked where they pass through an SoI.


If I'm correctly understanding what you mean, I'm fairly sure that's a deliberate optimization/design choice.

In the real world, a spacecraft or other object in orbit around Earth is also being constantly influenced by other celestial bodies, especially the sun and moon. Over short timescales this causes the spacecraft's orbital parameters to slowly drift; over longer timescales, it means the long-term position and fate of an object is chaotic and unpredictable. The behavior near the "boundary" between two spheres of influence is just a situation where these perturbations are more noticeable.

KSP only implements two-body physics, so a spacecraft is only affected by the gravity of one celestial body at any given time. This allows you to put something in orbit and know that it will stay there without you needing to constantly check on it and perform stationkeeping.

It's also the key simplification that makes "time warp" possible, since two-body orbits have closed-form solutions. To implement time warp with many-body physics, you would need to either keep the integration step size the same and drastically increase the amount of computation, or increase the step size and suffer from extreme inaccuracy, causing objects to crash or fly off into space.
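
For the elliptic case, the closed-form propagation is just Kepler's equation plus a Newton solve; a sketch (mu here defaults to Kerbin's gravitational parameter, ~3.5316e12 m^3/s^2):

    import math

    def propagate(a, e, M0, t, mu=3.5316e12):
        # advance the mean anomaly linearly in time, then solve
        # E - e*sin(E) = M for the eccentric anomaly by Newton iteration
        n = math.sqrt(mu / a ** 3)              # mean motion, rad/s
        M = (M0 + n * t) % (2 * math.pi)
        E = M
        for _ in range(20):
            E -= (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        nu = 2 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                            math.sqrt(1 - e) * math.cos(E / 2))
        return a * (1 - e * math.cos(E)), nu    # radius and true anomaly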


> It's also the key simplification that makes "time warp" possible, since two-body orbits have closed-form solutions. To implement time warp with many-body physics, you would need to either keep the integration step size the same and drastically increase the amount of computation, or increase the step size and suffer from extreme inaccuracy, causing objects to crash or fly off into space.

It makes things nicer at extremely high time warps, but it's not necessary. It's not like you need to update orbits nearly as often as part physics. The max time warp is 100000x, and at that speed if you updated orbits every 10 game seconds that would only be 400 calculations per tick, per craft. So without that simplification you might need a smaller cap on satellite swarms, or a max speed of 10000x, but time warp would still be well inside the realm of "possible".

Edit: You could probably get processor use really low by using an exact curve for the most influential object and a very slowly updated offset for other influences.


> It's not like you need to update orbits nearly as often as part physics. The max time warp is 100000x, and at that speed if you updated orbits every 10 game seconds that would only be 400 calculations per tick, per craft. So without that simplification you might need a smaller cap on satellite swarms, or a max speed of 10000x, but time warp would still be well inside the realm of "possible".

KSP's most well-known n-body physics mod does precisely this. 1 integration step is performed every frame by default, but during warp an integration step is performed every 10 in-game seconds for vessels and every 35 minutes for bodies [0].

> You could probably get processor use really low by using an exact curve for the most influential object and a very slowly updated offset for other influences.

The Keplerian curve no longer applies once you introduce additional influences, though, so it doesn't really matter how (in)frequently updates are applied for those other influences.

The approximation wouldn't be very good farther away from a body as well, as the difference in effect between the "most influential" and less-influential bodies would be smaller.

[0]: https://github.com/mockingbirdnest/Principia/issues/2247#iss...


> The Keplerian curve no longer applies once you introduce additional influences, though, so it doesn't really matter how (in)frequently updates are applied for those other influences.

It depends on how well you can simplify the math. I would imagine that instead of an n-body calculation it's a lot simpler to calculate the influence from one body plus one unchanging vector, but I've never tried it.

> The approximation wouldn't be very good farther away from a body as well, as the difference in effect between the "most influential" and less-influential bodies would be smaller.

Not a problem because if you're not close to anything then your orbit won't be chaotic.


> I would imagine that instead of an n body calculation it's lot simpler to calculate the influence from one body plus one unchanging vector, but I've never tried it.

I think what you describe could either be Euler's three-body problem (two fixed point masses and a particle) [0], or the restricted three-body problem (two point masses and a particle) in a rotating/pulsating reference frame. The former does have exact solutions, and I don't believe the latter does, though I'm admittedly not familiar with the literature. I'm also not sure how easy/hard it is to evaluate the exact solution, and how the difficulty compares to proper n-body integration.

That being said, I think using Euler's three-body problem would result in losing some potentially useful n-body effects. For example, centrifugal/centripetal forces would be missing compared to a restricted three-body problem in a rotating reference frame, so Lagrange points might not be present. There might be other effects I'm not aware of as well.

> Not a problem because if you're not close to anything then your orbit won't be chaotic.

I'm not sure I understand why being farther away from something would result in less chaotic trajectories? If anything, I'd expect more interesting orbits due to the lack of one dominating influence.

[0]: https://en.wikipedia.org/wiki/Euler%27s_three-body_problem


> I think what you describe could either be Euler's three-body problem (two fixed point masses and a particle) [0], or the restricted three-body problem (two point masses and a particle) in a rotating/pulsating reference frame.

Even simpler, though, because only one of the masses needs to have a location. The other one is effectively at a fixed direction and distance, far enough away that you can ignore relative motion.

> I'm not sure I understand why being farther away from something would result in less chaotic trajectories? If anything, I'd expect more interesting orbits due to the lack of one dominating influence.

I'll rephrase. The gravitational vector on the craft won't be shifting very fast, so you can get away with a quite big timestep.


> Even simpler, though, because only one of the masses needs to have a location. The other one is effectively at a fixed direction and distance, far enough away that you can ignore relative motion.

This still sounds like precisely what I described. Euler's three-body problem is two fixed point masses, so there's no relative motion by construction, and the restricted three-body problem in a rotating/pulsating reference frame has mathematical transformations applied so the two bodies are "effectively fixed" relative to each other in that reference frame (while preserving effects due to rotations, such as centripetal/centrifugal forces).

The question in such a case becomes whether such a thing is substantially better than a regular n-body integrator. Euler's three-body problem may lose some useful n-body effects such as Lagrange points, which partially defeats the purpose of moving away from Keplerian orbits, and the restricted three-body problem arguably isn't simplified enough compared to full-blown n-body integration to be worth it.

> I'll rephrase. The gravitational vector on the craft won't be shifting very fast, so you can get away with a quite big timestep.

Ah, that makes more sense. IIRC Principia has an adaptive timestep, so it already does that, though that's with full n-body calculations. Swapping between the approximation and proper n-body depending on position relative to other bodies seems like a rather complex scheme, though, and I'm not sure whether that'd be the best approach.


> This still sounds like precisely what I described. Euler's three-body problem is two fixed point masses, so there's no relative motion by construction, and the restricted three-body problem in a rotating/pulsating reference frame has mathematical transformations applied so the two bodies are "effectively fixed" relative to each other in that reference frame (while preserving effects due to rotations, such as centripetal/centrifugal forces).

You still have to care about the where the particle is relative to both masses. The whole point of the calculation is figuring out which way the particle goes, and the simplification I'm suggesting removes a lot of that math. Instead of two masses providing a continuously varying force in both direction and magnitude, you have one mass providing a continuously varying force plus a static offset. This removes multiple degrees of freedom from the problem.

> Swapping between the approximation and proper n-body depending on position relative to other bodies seems like a rather complex scheme, though, and I'm not sure whether that'd be the best approach.

I wasn't suggesting swapping between the methods. If you're in the middle of nowhere, then while the Keplerian portion of the model will be a smaller factor, it won't harm anything.


> Instead of two masses providing a continuously varying force in both direction and magnitude, you have one mass providing a continuously varying force plus a static offset.

Ah, so the force vector is constant/infrequently updated, not the position of the second body. My apologies for the misunderstanding.

I'm honestly a bit curious what that would look like. For example, what would an orbit around the Earth-Moon L1 look like? What would an Earth -> Moon low-energy transfer look like?

I feel like depending on the system you might need to update the "fixed" force vector relatively frequently to get anywhere close to approximating n-body results, which basically sounds like regular integration.

I suppose at some point the question becomes how much fidelity are you willing to sacrifice in the name of decreasing CPU usage.

> I wasn't suggesting swapping between the methods.

My mistake again. Sorry about that.

> If you're in the middle of nowhere, then while the Keplerian portion of the model will be a smaller factor, it won't harm anything.

Wouldn't that arguably be where the most significant errors would be, as that's where the relatively unphysical constant force vector would have the most significant influence?


> Ah, so the force vector is constant/infrequently updated, not the position of the second body. My apologies for the misunderstanding.

That's fine, glad we got it cleared up.

> I'm honestly a bit curious what an that would look like. For example, what would an orbit around the Earth-Moon L1 look like? What would an Earth -> Moon low-energy transfer look like?

> I feel like depending on the system you might need to update the "fixed" force vector relatively frequently to get anywhere close to approximating n-body results, which basically sounds like regular integration.

For a basic example-numbers implementation, replacing an n-body simulation that updates every 10 seconds, I was imagining that you might update the force vector 1/100th as often, every 1000 seconds. That's plenty fast to accurately handle a multi-day orbit around a lagrange point or a low energy transfer. The paths the craft take should look completely normal.

If you then combine a normal single-influence orbit with that force vector, you could summarize 1000 seconds of orbit into one moderately complex equation. So instead of doing a moderately complex calculation every 10 seconds, you'd do two of them every 1000 seconds, an estimated 50x CPU savings.

If you're not very close to anything, there's no benefit over just running an n-body calculation every 1000 seconds. But the hard-to-handle case is orbits that are low enough to need rapid updates, but high enough that it's noticeably wrong to use an ellipse. And it's easy to end up with a lot of things in those orbits.

> Wouldn't that arguably be where the most significant errors would be, as that's where the relatively unphysical constant force vector would have the most significant influence?

Treating the forces as constant over a stretch of time, when they actually are almost constant, shouldn't have all that much error, unless I'm missing something glaring.

As you get further and further away from the most influential mass, this system gets closer and closer to simply being an n-body simulation with a timestep of how often you update the force vector.
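
A sketch of the scheme as I understand it, using a plain integrator for the dominant body rather than the closed-form Kepler combination discussed above (hypothetical code, not from any mod):

    import numpy as np

    def far_field_accel(r, distant_bodies):
        # recompute only occasionally, e.g. every 1000 s of game time
        acc = np.zeros(3)
        for mu, pos in distant_bodies:          # (grav. parameter, position)
            d = pos - r
            acc += mu * d / np.linalg.norm(d) ** 3
        return acc

    def step(r, v, dt, mu_primary, frozen_accel):
        # dominant body evaluated every step; everything distant enters
        # only through the frozen acceleration vector computed above
        a = -mu_primary * r / np.linalg.norm(r) ** 3 + frozen_accel
        v = v + a * dt                          # semi-implicit Euler
        r = r + v * dt
        return r, v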


> For a basic example-numbers implementation, replacing an n-body simulation that updates every 10 seconds, I was imagining that you might update the force vector 1/100th as often, every 1000 seconds. That's plenty fast to accurately handle a multi-day orbit around a lagrange point or a low energy transfer. The paths the craft take should look completely normal.

I'm honestly curious about what Principia's largest step size is when calculating predictions. Best I can tell, the step size starts large and shrinks until the tolerance-to-error ratio is small enough [0]. I can't seem to figure out how large the initial step is, though.

But in any case, I suppose it'd come down (again) to how important accuracy is.

I'm now extremely tempted to fire up KSP with Principia to see what happens if I were to mess with the timesteps. Don't think I'm familiar enough with the codebase to mess around with it properly, though.

> If you then combine a normal single-influence orbit with that force vector, you could summarize 1000 seconds of orbit into one moderately complex equation.

A lot hinges on the combination being as simple as the sentence makes it look. I'm not entirely convinced that the analysis is straightforward (fewer forces, but you might lose some useful properties), but I'll be the first to admit that I'm not exactly an expert on this subject.

I really wish I had more time and knowledge; this sounds like a good candidate for some test code. I have absolutely no faith I'd be able to pull off something proper, though; good n-body integrators are well out of my skill range, and I don't know how I'd even begin approaching your proposed scheme outside naive integration (which wouldn't exactly be a fair comparison to high-quality n-body integrators).

Did you have a particular method of combination in mind?

> Treating the forces as constant over a stretch of time, when they actually are almost constant, shouldn't have all that much error, unless I'm missing something glaring.

Well, maybe; I'm honestly not confident enough in the possible models I had in mind to stand behind what I said (I was thinking in terms of the relative magnitude of the "correction" of the second force, but looking back I'm not entirely sure how relevant that is). I really shouldn't have been so confident in that particular line of questioning. Sorry about that.

[0]: https://github.com/mockingbirdnest/Principia/blob/f84c96953a...


I don't know what a good force integration looks like in the first place, so I can't really help design the needed calculation, sorry. But I imagine you'd use similar techniques in many ways to get good accuracy.


I found the Ahmad-Cohen scheme [0, PDF] (through [1]), which seems related to what you describe:

> Our scheme takes advantage of this fact by dividing the force on a particle into two parts: a slowly varying part which is due to the "distant" stars, the regular force, and another component, the irregular force, due to the stars in the immediate neighborhood of the star in question.

[2] has a bit more performance info:

> The gain by using the Ahmad-Cohen scheme is expressed as (N/3.8)^(1/4) for both the fourth-order standard and Hermite schemes, but would be significantly smaller on vector or parallel machines.

I think Principia currently has info for 34 bodies, so I think the improvement would be on the order of 1.73x according to that equation if the Ahmad-Cohen scheme is implemented (and is applicable).

That being said, I believe it still involves integration, as opposed to the hypothetical closed-form solution in your scheme. If no useful closed-form solution exists for the scheme you describe, then the Ahmad-Cohen scheme might be a closer match.

I admit my initial skepticism was at least partially incorrect. You were right that approximating less substantial influences by increasing the timestep could be useful for performance. The sole remaining question is whether you still need to stick with an integration scheme or whether a closed-form approximation exists.

[0]: https://courses.physics.ucsd.edu/2017/Winter/physics141/Lect...

[1]: https://scicomp.stackexchange.com/questions/21949/n-body-sim...

[2]: http://articles.adsabs.harvard.edu//full/1992PASJ...44..141M...


I found Principia’s code to be quite nice to read too; I recommend it next time you have a lazy Sunday afternoon.


When you do that, you have to start specifying what reference-point you're looking from. And then regardless of what you choose, you get these awesomely-weird spiraling / spirograph-ing orbits in some cases, e.g. like Principia shows https://www.youtube.com/watch?v=eU-kLLeE7n0

The non-continuous flip lets them keep orbits exclusively centered around the you're-most-likely-orbiting-this thing, which makes them all look "normal" and the same. Though I would like to be able to see either option in the stock game - they both have their uses.


There are, I think, 5 different ways that KSP supports for rendering orbits once the vessel passes through an SOI.


If you mean the seam between the Surface and Orbit frames of reference, then it's unavoidable. One measures velocity with the surface velocity vector added and one doesn't. The surface velocity vector is the linear velocity of the ground due to the planet's rotation around its own axis.


My biggest question: who the hell plays KSP without the orbit map interface?


Getting to the Mun manually isn't too hard once you know what you're doing. Get to a low orbit, then thrust prograde when the Mun comes over the horizon. Keep an eye on your remaining delta-v (or do the math beforehand and watch your stopwatch) and you should come pretty close. Make sure your Munar orbit is counterclockwise and returning is just prograde thrust again.

For a more immersive experience, install the RasterPropMonitor mod to get interactable IVA displays. Add a compatible camera mod and Docking Port Alignment Indicator and you can even do a multi-craft Apollo style mission entirely from the cockpit.


Actually, it can be a fun exercise flying purely in IVA with a slide rule and window- or periscope-based reticles, just like those on the Apollo Lunar Module [1][2] or on Russian space stations [3]. The Apollo descent and landing procedure included elaborate roll maneuvers to confirm everything visually with the landing point designator. You can measure the horizon curvature and use similar techniques to guess your altitude and attitude.

[1] https://apollo11space.com/apollo-11-windows/

[2] https://www.hq.nasa.gov/alsj/coas.htm

[3] (yes, the ISS has an optical viewfinder called VShTV - it's installed in the Zarya module. It was meant for emergencies and has never been used to reorient the module manually, AFAIK)


I've done this for fun to see if I could get to the Mun only looking out the window in the crew cabin. (I succeeded!)


That’s what I was imagining but I would have thought it impossible. Very impressive!


Thanks! It requires some planning.

It's been ~7-8 years, but I guess I probably figured out what trans-Munar injection velocity I needed and roughly what phase angle I wanted the Mun at (these are some basic astrodynamical calculations). You can determine velocity precisely from instruments and phase angle well enough from visuals that you pretty much guarantee a rendezvous.

Once you enter the Munar sphere of influence, just wait until periapsis and start killing velocity. You'll want a table prepared (somewhat conservatively, to give yourself room for fine-tuning) of what your velocity should be at a given altitude to make sure that you land at 0 m/s without using too much propellant, and remember to adjust the "sea level" altitudes for ground level before your radar altimeter kicks in. I guess the hard part is making sure you land on something flat, but you can impart some lateral impulse if necessary until you see a good landing spot below.

Getting back is similar, but the calculations are a bit simpler.
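
The table is just the constant-deceleration profile; something like this (the 1.63 m/s^2 default is the Mun's surface gravity, and the 5 m/s^2 thrust is an assumed craft):

    import math

    def max_speed(h, thrust_accel, g=1.63):
        # fastest you can fall at altitude h (m) and still reach 0 m/s
        # at the surface under a constant full-throttle deceleration
        return math.sqrt(2.0 * (thrust_accel - g) * h)

    for h in range(500, 5001, 500):             # print a landing card
        print(h, round(max_speed(h, 5.0), 1))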


Here’s a recording of another guy doing it to “Ride of the Valkyries”: https://youtu.be/iXYGo9KO2nI


Apollo 8 proved it possible, actually. ;) The onboard sextant worked perfectly.


I watched a streamer play KSP for the first time, not have the patience to learn orbits (at least on stream), just point at the Mun and go for it.

I was yelling at the screen that it wouldn't work, but somehow he made it there, though with the same delta-v he could have gone to Eve.


You can get far in KSP by adding more struts or delta-v, lol.


I believe the tutorial teaches you that you can do that (or at least it did). I suspect the orbit of the Mun was not placed by accident.


Presumably the folks that make airplanes instead of rockets?


Or that one guy who only makes trains.


...trains? That's news to me lol.

Guess that in a game about space, the only limit is your imagination.


Some people start fires just to watch them burn.


With enough boosters you can get to the Mun in a straight line :D


I'm a bit puzzled here, after seeing the word "screen-space". I'm no game dev, but I know that there are tessellation shaders for this very purpose. We can let the shader churn out more points on demand in a separate step of the rendering pipeline. I think doing this in "screen-space" is a bit unnatural. (Just nitpicking.)


"Screen space" refers to the coordinate system. Regardless of whether you were doing tessellation on the CPU or the GPU, you would want to use screen coordinates to make your decisions about which segments to subdivide, because what you care about is the deviation of the rendered curve from its "ideal" path. A large error in world coordinates doesn't matter if it's so far from the viewpoint that it looks tiny.

Tessellation shaders are useful for processing polygonal geometry with many thousands of polygons, but they have a fairly constrained programming model. And as you can see from the example images, rendering a high-quality orbit path only requires a few dozen vertices. The performance benefit from moving that computation into a tessellation shader is likely to be insignificant compared to the additional complexity and overhead.
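
A minimal sketch of that screen-space criterion, assuming curve(t) gives a world-space point and project() maps world space to pixels:

    import numpy as np

    def tessellate(curve, t0, t1, project, tol_px=1.0, depth=0, out=None):
        # subdivide until the curve's midpoint deviates from the chord by
        # less than tol_px in screen space (with a depth cap for safety)
        if out is None:
            out = [project(curve(t0))]
        tm = 0.5 * (t0 + t1)
        p0, pm, p1 = project(curve(t0)), project(curve(tm)), project(curve(t1))
        if depth < 16 and np.linalg.norm(pm - 0.5 * (p0 + p1)) > tol_px:
            tessellate(curve, t0, tm, project, tol_px, depth + 1, out)
            tessellate(curve, tm, t1, project, tol_px, depth + 1, out)
        else:
            out.append(p1)
        return out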


Thanks for the reply. I've written graphics code only in more-or-less lab environments, and I've read many papers mentioning screen-space algorithms that trade off accuracy for other characteristics. Maybe that's what triggered me. lol


Can't wait to see what they will be doing with multiplayer.



