
Jacked: The Unauthorised Behind-the-Scenes Story of Grand Theft Auto - David Kushner

The Soul of a New Machine - Tracy Kidder





Will do, thank you!


There's also "7-7: we're going to heaven; 7-5: somebody else wants to fly; 7-6: radio needs a fix".




That's cool. Apart from the article being good, the design of your page is very nice.


Thanks, I appreciate it


I would love to read an ELI5 explanation of how it is possible to decode something that is under the noise floor.


The sibling post did a good job of outlining some techniques. I’m going to give you a simple example that might help with the “ahhh, you can get stuff out from under the noise” intuition.

Let’s say you have a noise source made up of random numbers from -1 to 1 (mean 0), and a signal that represents a binary 1 as 0.1 and a binary 0 as -0.1. Our binary signal gets added to the noise.

With one bit and one noise sample, we don’t really get much out of it. 0.567 - 0.1 = 0.467 and 0.567 + 0.1 = 0.667. Looking at 0.467 and 0.667, we can’t really make any judgement of whether either of those samples is a 1 or a 0.

If you extend your bits out though so that, say, one bit gets transmitted 100 times, then you can take 100 samples on the receive end and take the mean of those. Because the noise source has mean zero, the noise component of the (noise+sample) mean should come out around zero. So you get a mean of maybe -0.075, or a mean of 0.083. At that point, it’s reasonable to say “it was likely a -0.1 or 0.1” that was transmitted.
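If it helps to see it concretely, here’s a rough sketch of that averaging idea in Python. The numbers mirror the example above; the function names and the choice of 100 repeats are just for illustration, not from any particular modem:

    import numpy as np

    rng = np.random.default_rng(0)

    def transmit_bit(bit, repeats):
        # Send one bit as +/-0.1, repeated `repeats` times, each copy
        # buried in noise that is uniform on [-1, 1] (mean zero).
        level = 0.1 if bit == 1 else -0.1
        return rng.uniform(-1.0, 1.0, repeats) + level

    def receive_bit(samples):
        # Average the samples: the zero-mean noise washes out, and the
        # sign of what's left tells you which level was sent.
        return 1 if samples.mean() > 0 else 0

    print(receive_bit(transmit_bit(1, repeats=1)))    # basically a coin flip
    print(receive_bit(transmit_bit(1, repeats=100)))  # usually 1; more repeats, more reliable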

All of the fancy techniques enhance this process, but at its core that’s fundamentally what’s happening. Some of the techniques spread things out over different frequencies, some spread out over time, but it’s all roughly the same idea.


I don't know why you're being downvoted, it's a truly fascinating concept.

One method takes your input data bit by bit and combines it with a pseudorandom code, which never changes and is pre-shared with all participants. Effectively, each bit gets transmitted across multiple frequencies concurrently, and because the receiving side knows the pseudorandom code, it uses statistical inference to decide whether it's seeing enough evidence of a 0 or a 1 across the various frequencies dictated by the code.

That's how you pull something out of the noise floor: without applying the statistical methods, the received signal is indistinguishable from noise. It's certainly not possible to look at a single frequency and decode the transmission, because the transmission medium is lossy and constantly corrupts the transmission of individual bits at random.
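For a feel of how that works, here's a rough, non-protocol-specific sketch in Python. It spreads in time rather than across frequencies, but the statistical idea is the same: a weak bit is multiplied onto a long pre-shared +/-1 chip sequence, and the receiver correlates against that same sequence to make its decision. The names and numbers are made up for illustration:

    import numpy as np

    rng = np.random.default_rng(42)
    CHIPS_PER_BIT = 1024
    code = rng.choice([-1.0, 1.0], CHIPS_PER_BIT)   # pre-shared pseudorandom code

    def spread(bits):
        # Each bit (mapped to +1/-1) multiplies the whole chip sequence.
        symbols = [1.0 if b else -1.0 for b in bits]
        return np.concatenate([s * code for s in symbols])

    def despread(rx):
        # Correlate each chip-length block against the known code and take
        # the sign; noise that doesn't line up with the code averages out.
        blocks = rx.reshape(-1, CHIPS_PER_BIT)
        return [1 if (block * code).sum() > 0 else 0 for block in blocks]

    bits = [1, 0, 1, 1, 0]
    tx = 0.1 * spread(bits)                    # weak signal...
    rx = tx + rng.normal(0.0, 1.0, tx.size)    # ...roughly 20 dB under the noise
    print(despread(rx))                        # usually recovers [1, 0, 1, 1, 0]

Looking at any single received sample, 0s and 1s are hopeless to tell apart; only the correlation over the whole code reveals the bit.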


Probably the simplest explanation I could give would be: when you're communicating, you can always repeat your message to have a better chance of it being received. The example I like to use to remember this concept is one of people speaking in noisy places. When someone is having trouble understanding you, some options are to talk louder, talk slower, or repeat what you said. However, in the example given, the power is fixed, so talking louder isn't an option.

A more complicated explanation: the fundamental reason why this is possible is Shannon's channel capacity theorem [0]. It tells us that the parameter that determines whether we can communicate reliably is not the signal-to-noise ratio (SNR) but the energy-per-bit to noise power spectral density ratio (Eb/N0). The difference is that Eb/N0 accounts for the total energy dedicated to sending a bit, whereas SNR only accounts for the rate at which you send that energy. The channel capacity theorem further tells us that the minimum Eb/N0 required to communicate reliably is about -1.6 dB [1].

In the context of Olivia MFSK, the article claims that this communication scheme can operate at -10 dB SNR, which is possible as long as the waveform does something to increase its Eb/N0. The article says that Olivia MFSK uses error correction codes, which is one way to do that. Essentially, error correction codes add redundancy to the transmitted bit stream so that errors can be corrected. The simplest example is the repetition code: for every bit that you want to send, you send an agreed-upon number of copies, and the more copies you send, the less likely it is that over half of them will be wrong. As you might imagine, there are also much more sophisticated error correction codes. Another way to increase your Eb/N0 is Direct Sequence Spread Spectrum (DSSS), which is the technique that Craig mentioned.
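As a concrete (if toy) illustration of the repetition-code idea, here is a short sketch with majority-vote decoding over a binary symmetric channel. Nothing here is specific to Olivia MFSK, and the 10% flip probability is just a made-up number:

    import numpy as np

    rng = np.random.default_rng(7)

    def encode(bits, n=5):
        # Send n copies of every bit.
        return np.repeat(np.asarray(bits), n)

    def noisy_channel(bits, flip_prob=0.1):
        # Binary symmetric channel: each transmitted bit flips independently.
        flips = (rng.random(bits.size) < flip_prob).astype(int)
        return bits ^ flips

    def decode(received, n=5):
        # Majority vote over each group of n copies.
        groups = received.reshape(-1, n)
        return (groups.sum(axis=1) > n / 2).astype(int).tolist()

    msg = [1, 0, 1, 1, 0, 0, 1]
    rx = noisy_channel(encode(msg))
    print(decode(rx) == msg)   # usually True: 3-of-5 votes fix most flips

Each extra copy costs time or bandwidth, which is exactly the energy-per-bit bookkeeping that Eb/N0 captures.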

If you're interested, [2] is a good reference book on digital communications, and [3] is a detailed, but still very readable, text on information theory.

[0] https://en.wikipedia.org/wiki/Channel_capacity

[1] https://en.wikipedia.org/wiki/Eb/N0

[2] Proakis, John, and Masoud Salehi. "Digital Communications." (2008).

[3] Cover, Thomas M., and Joy A. Thomas. "Elements of Information Theory." (2006).


Well that is sort of a misnomer, as it depends on your receiver bandwidth. They always say such and such modulation (say JT65) is “under the noise floor”. Of course it is when your bandwidth definition is 2.5 kHz (an HF SSB channel). But the symbol rate for JT65 is maybe 10 Hz, so if you filter to 10 Hz, it isn't under the noise floor.

Same with GPS: sure, it's way under the noise floor if your receiver bandwidth is the full 2 MHz, but once it is de-spread to the information bandwidth, it is not under the noise floor.

You can get pretty close to Shannon's limit, which I suppose is under the noise floor at its -1.6 dB limit, but in practice you need extra margin, and then you can usually see the signal with the proper filtering.
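Back-of-the-envelope version of that bandwidth point (the 2.5 kHz and ~10 Hz figures are the ones used above, and -174 dBm/Hz is the usual room-temperature thermal noise density):

    import math

    def noise_floor_dbm(bandwidth_hz, n0_dbm_per_hz=-174.0):
        # Thermal noise power in a given bandwidth: N = N0 + 10*log10(B)
        return n0_dbm_per_hz + 10 * math.log10(bandwidth_hz)

    ssb_bw = 2500.0   # 2.5 kHz SSB channel
    info_bw = 10.0    # rough information bandwidth from the comment above

    print(f"Noise floor in 2.5 kHz: {noise_floor_dbm(ssb_bw):.1f} dBm")
    print(f"Noise floor in 10 Hz:   {noise_floor_dbm(info_bw):.1f} dBm")
    print(f"Gain from matched filtering: {10 * math.log10(ssb_bw / info_bw):.1f} dB")

So a signal quoted as 10+ dB “below the noise” in the 2.5 kHz sense can sit comfortably above it once you only look at the bandwidth it actually occupies, since narrowing from 2.5 kHz to 10 Hz buys about 24 dB.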


I haven't read the book (yet), but I believe this talk is a good intro: https://www.c-span.org/video/?196682-1/the-power-broker


