How accurate is this two-pass approach in general? From my outsider's perspective, it always looked like most of the difficulty in implementing Lanczos was reorthogonalization, which will be hard to do with the two-pass algorithm.
Or is this mostly a problem when you actually want to calculate the eigenvectors themselves, and not just matrix functions?
That's an interesting question. I don't have too much experience, but here's my two cents.
For matrix function approximations, loss of orthogonality matters less than for eigenvalue computations. The three-term recurrence maintains local orthogonality reasonably well for moderate iteration counts. My experiments [1] show orthogonality loss stays below $10^{-13}$ up to $k=1000$ for well-conditioned problems, and only becomes significant (jumping to $10^{-6}$ and higher) around $k=700$-$800$ for ill-conditioned spectra. Since you're evaluating $f(T_k)$ rather than extracting individual eigenpairs, you care about convergence of $\|f(A)b - x_k\|$, not spectral accuracy. If you need eigenvectors themselves or plan to run thousands of iterations, you need the full basis, and the two-pass method won't help. Maybe methods like [2] would be more suitable?
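To make the two-pass structure concrete, here is a minimal sketch (my own toy code, not the implementation from [1]): the first pass runs the plain three-term recurrence and keeps only the tridiagonal coefficients, then $y = \|b\| f(T_k) e_1$ is computed on the small matrix, and a second pass regenerates the basis vectors on the fly to accumulate $x_k = V_k y$, so the full basis is never stored.

```python
# Minimal sketch of two-pass Lanczos for f(A) b (toy code, names are mine);
# no reorthogonalization and no breakdown handling, to keep it short.
import numpy as np
from scipy.linalg import eigh_tridiagonal


def two_pass_lanczos(matvec, b, f, k):
    """Approximate f(A) @ b with k Lanczos steps, storing only O(1) vectors.

    matvec(v) must return A @ v for a symmetric A; f acts elementwise on
    eigenvalues (e.g. np.exp, np.sqrt, lambda t: 1/t).
    """
    nrm_b = np.linalg.norm(b)
    alpha = np.zeros(k)       # diagonal of T_k
    beta = np.zeros(k - 1)    # off-diagonal of T_k

    # Pass 1: plain three-term recurrence, keeping only two basis vectors.
    q_prev, q = np.zeros_like(b), b / nrm_b
    for j in range(k):
        w = matvec(q) - (beta[j - 1] * q_prev if j > 0 else 0.0)
        alpha[j] = q @ w
        w -= alpha[j] * q
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]

    # Small dense step: y = ||b|| * f(T_k) e_1 via the eigendecomposition
    # T_k = S diag(theta) S^T of the k-by-k tridiagonal matrix.
    theta, S = eigh_tridiagonal(alpha, beta)
    y = nrm_b * (S @ (f(theta) * S[0, :]))

    # Pass 2: regenerate the same basis vectors (reusing alpha/beta, so no
    # inner products this time) and accumulate x = V_k y on the fly.
    x = np.zeros_like(b)
    q_prev, q = np.zeros_like(b), b / nrm_b
    for j in range(k):
        x += y[j] * q
        if j < k - 1:
            w = matvec(q) - (beta[j - 1] * q_prev if j > 0 else 0.0)
            w -= alpha[j] * q
            q_prev, q = q, w / beta[j]
    return x


if __name__ == "__main__":
    # Toy check with f(t) = 1/t, i.e. solving A x = b for an SPD matrix.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((200, 200))
    A = M @ M.T + 200.0 * np.eye(200)
    b = rng.standard_normal(200)
    x = two_pass_lanczos(lambda v: A @ v, b, lambda t: 1.0 / t, k=60)
    print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

The price is a second sweep of matvecs; the payoff is O(1) vector storage, which is also exactly why full reorthogonalization (which needs all of $V_k$ around) doesn't fit this scheme.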
"Esports" is not a league. That would be like saying "sports" is a league.
There are leagues around some games (like the ones mentioned in the article). There are also events with "league" in the name that are not really leagues (like ESL Pro League). In any case, none of them are financially successful in the US.
The only series that release "every 12-18 months" are sports games and Call of Duty, and I can assure you that the overlap between that audience and the Persona one (which has five main-series entries of which barely anyone has played the first two) is extremely small.
Have you considered that you may just be very out of touch?
Nice writeup and algorithm. My first instinct would have been to take the lazy way out and just use a table with 365 entries (or 366, or both depending on the implementation), but this is a lot more elegant.
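For what it's worth, here is a sketch of the table approach I had in mind, assuming the task is roughly "day-of-year to (month, day)"; I haven't checked this against the article's exact problem, and all names here are my own:

```python
DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def build_table(leap):
    """Entry d-1 is the (month, day) pair for day-of-year d."""
    table = []
    for month, days in enumerate(DAYS_IN_MONTH, start=1):
        if month == 2 and leap:
            days += 1
        table.extend((month, day) for day in range(1, days + 1))
    return table                     # 365 entries, or 366 for a leap year

TABLES = {False: build_table(False), True: build_table(True)}

def month_and_day(day_of_year, leap=False):
    return TABLES[leap][day_of_year - 1]   # day_of_year is 1-based

# month_and_day(60) -> (3, 1); month_and_day(60, leap=True) -> (2, 29)
```

Two tables (365 and 366 entries) cover the "or both" case I mentioned.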
First of all, you are correct that nonlinear optics usually requires high field strengths. But...
>Photons are bosons and therefore are very reluctant to interact, essentially requiring nonlinearities.
Please don't throw out random sciency terms. For one thing, interaction is pretty much by definition nonlinear. For another, photons are not reluctant to interact. Photon-photon scattering is negligible (which has nothing to do with them being bosons, as gluons and mesons readily demonstrate), but nonlinear optics doesn't rely on photon-photon scattering anyway.
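For anyone wondering what the nonlinearity actually refers to: it is the medium's polarization response to the field, conventionally expanded as a power series (standard textbook form, not taken from the article),

$$ P = \varepsilon_0 \left( \chi^{(1)} E + \chi^{(2)} E^2 + \chi^{(3)} E^3 + \cdots \right). $$

The higher-order terms are what need the high field strengths mentioned above; the photons interact with the medium, not with each other.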
For completeness, there are languages that do not require a stack to implement (nor any workaround that in practice emulates a stack). If the language does not support recursion, you don't need one: all variables can be statically allocated, and only a single return address needs to be stored per function, so you don't need a stack for return addresses either.
The most famous example is FORTRAN, at least until Fortran 90 added the RECURSIVE keyword.
The usual technique is to keep a 4-element array of sums (so sum[j] is the sum of all terms of the form a[4*i + j] * b[4*i + j]), and then take the total at the very end. This allows for the use of vectorization even with strict IEEE compliance.
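A quick sketch of that layout (names are mine; in pure Python this is only illustrative, the point is the fixed summation order that a compiler can map onto SIMD lanes in a C/Fortran loop without reordering any IEEE-754 addition):

```python
def dot_4acc(a, b):
    """Dot product with four independent accumulators (illustrative sketch)."""
    assert len(a) == len(b)
    s = [0.0, 0.0, 0.0, 0.0]          # s[j] accumulates a[4*i + j] * b[4*i + j]
    n4 = len(a) - len(a) % 4
    for i in range(0, n4, 4):
        for j in range(4):
            s[j] += a[i + j] * b[i + j]
    total = (s[0] + s[1]) + (s[2] + s[3])   # combine the partial sums once, at the end
    for i in range(n4, len(a)):             # scalar tail for the leftover elements
        total += a[i] * b[i]
    return total
```

The result can differ in the last bits from a plain left-to-right loop, but this order is what the source now specifies, so the vectorizer doesn't have to break strict IEEE semantics to use it.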
Generally, I would recommend against -ffast-math, mostly because it enables -ffinite-math-only and that one can really blow up in your face. Most of the other flags (like -funsafe-math-optimizations) aren't that bad from an accuracy standpoint. Obviously you should not turn them on for code that you have actually tuned to minimize error, but in other cases they rarely degrade the results.
The mathematics is sound, but the reasoning around it is unclear to me. The derivation shows that Lorentz transformations and Galilean transformations are the only ones that allow for the equivalence of all inertial frames, which is a nice result. But it clearly does require the additional assumption of an invariant speed to conclude that Lorentz transformations are anything more than a mathematical curiosity.
So what have we really gained? Since we still need the extra assumption that an invariant speed actually exists, we could've just gone the other way and done the light clock calculation to get the Lorentz transformation instead.
I agree that the paper's title is somewhat misleading, since you still do need to assume an invariant speed to rule out the Galilean transformations.
However, this derivation does greatly narrow things down before the invariant speed comes in: at the point where the invariant speed is assumed, you already know that there are only two alternatives: an invariant speed (Lorentz transformations) or Galilean transformations. So it's much easier to see why you would assume an invariant speed; the assumption isn't just pulled out of thin air at the start, it is seen to be one of only two alternatives that are compatible with the principle of relativity.
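For concreteness, here is how that family is usually written in such derivations (my notation, not necessarily the paper's): the relativity principle plus homogeneity and isotropy leave a one-parameter family of transformations

$$ x' = \frac{x - vt}{\sqrt{1 - Kv^2}}, \qquad t' = \frac{t - Kvx}{\sqrt{1 - Kv^2}}, $$

with a single frame-independent constant $K$ (negative $K$ is typically ruled out by a causality argument). $K = 0$ is the Galilean case, and any $K > 0$ gives the Lorentz transformations with invariant speed $c = 1/\sqrt{K}$; which case nature picks, and the value of $K$, still has to come from experiment.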
>to see why you would assume an invariant speed, the assumption isn't just pulled out of thin air at the start
It's not out of thin air, it's from a very empirically successful theory: Maxwell's electrodynamics. The problem back then was that this theory was not Galilean invariant: the speed of electromagnetic waves in Maxwell's equations is the same in all reference frames. So you either abandon the idea that the laws of physics remain the same in all reference frames, OR abandon Galilean velocity addition.
Einstein's approach was to modify the latter so that it fit the former, by imposing an invariant speed. This is spelled out in Einstein's original paper. It's not a mystery assumption.
It's also a very common procedure: two empirically successful theories conflict, and you resolve it by building something larger than both that reduces to each of them in the appropriate limit.
I also agree we have gained insight into how the kinematic structure is derived from algebra + a physical constraint. Though you still need physical insight to choose which constraint.
> It's not out of thin air, it's from a very empirically successful theory: Maxwell's electrodynamics.
That was the source of the assumption, yes--as you point out, Einstein said so in his original paper. But from the standpoint of mechanics, as opposed to electrodynamics, it was pulled out of thin air. There was no reason based on mechanics to make any such assumption. In fact, everyone else working on the problem was looking at ways to modify electrodynamics, not mechanics--in other words, to come up with a theory of electrodynamics that was Galilean invariant, rather than a theory of mechanics that was Lorentz invariant.
> you either abandon the idea that laws of physics remaining the same under all reference frames, OR abandon Galilean velocity addition
Or, as above, you look for a Galilean invariant theory of electrodynamics. Of course we know today that that is a dead end, but that wasn't known then.
That is true. Still, we are kind of trading one unintuitive postulate (an invariant speed) for a different one: Why would we ever think that the time interval between two events can depend on the reference frame?
Sadly, I feel like SR can only really be "understood" as a complete theory. All the individual phenomena (time dilation, length contraction, relativity of simultaneity, constant speed of light etc.) are very hard to understand, because you cannot just take one of them and add it to classical relativity without immediately running into paradoxes. Only once the whole picture is known do you see that all the pieces beautifully imply each other. This problem applies to every approach to the subject I've seen.
> Why would we ever think that the time interval between two events can depend on the reference frame?
This isn't a postulate, it's a derived theorem. That's true no matter what axiomatic formulation you use.
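For example, once you have the Lorentz transformation $t' = \gamma\,(t - vx/c^2)$ with $\gamma = 1/\sqrt{1 - v^2/c^2}$ (standard notation, not tied to the paper's), time dilation follows in one line: for two events at the same place in the unprimed frame ($\Delta x = 0$),

$$ \Delta t' = \gamma\,\Delta t \ge \Delta t, $$

so the frame dependence of the time interval is an output of the transformation, not an extra input.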
> All the individual phenomena (time dilation, length contraction, relativity of simultaneity, constant speed of light etc.) are very hard to understand, because you cannot just take one of them and add it to classical relativity without immediately running into paradoxes. Only once the whole picture is known do you see that all the pieces beautifully imply each other.
This is all true, but all of these things you talk about (except the speed of light) are not postulates; they are derived theorems. No approach to relativity that I'm aware of has ever tried to start with any of these things as postulates. Even Einstein's original 1905 paper didn't start with any of these things as assumptions. He started with the principle of relativity and the speed of light being invariant. This paper is just showing how to derive at least a part of the second assumption from the first.
"Let us consider two events, E1 and E2, at the same spatial location in frame O, but separated by a time difference τ. In O′ the two events are separated by a time lapse T."
If you didn't know special relativity, you would never get this idea.
> If you didn't know special relativity, you would never get this idea.
You're looking at it backwards. The paper is not assuming anything here; in fact it is explicitly refusing to assume that we know the correct transformation law between frames. That means we have to leave open the possibility of the time difference changing, not because we know SR, but because we are being logically rigorous.
> Sadly, I feel like SR can only really be "understood" as a complete theory. All the individual phenomena (time dilation, length contraction, relativity of simultaneity, constant speed of light etc.) are very hard to understand, because you cannot just take one of them and add it to classical relativity without immediately running into paradoxes.
This is because you implicitly used the (wrong) postulate that "phenomena of special relativity can be iteratively added to a classical description of a non-relativistic theory".
Yes, but this paper isn't an island unto itself. It's a nice little lemma that can illustrate a point within the subject of special relativity, from a different perspective. It could be included as a small derivation in a textbook, or lecture notes, or even broken down as an exercise for the reader.
In my experience approaching the same idea from many different perspectives yields a deeper, richer understanding of the underlying concept, and offers a path for someone to reach the big picture understanding you mention.
MATLAB had its first commercial releases in the mid-80s and had indexing syntax that looks pretty much the same. Since it got popular very quickly, I assume that it is the most immediate source of Fortran 90's indexing style.
ALGOL 68 had some form of array slicing as well, but I'm not sure how influential it really was in this department.