[Laser] fundamentals
Glenn Thomas
glennt at charter.net
Wed Feb 14 04:12:39 EST 2007
At 07:51 PM 2/13/2007, James Whitfield wrote:
>Glenn,
>
>Thank you for the response.
>
>I disagree that the "one and only" difference is that radio "wavelengths are
>somewhat longer." Unless someone has amateur tools that I have not heard
>about yet, we cannot use phase or frequency modulation techniques on the
>photon stream that are commonly used for communications with radio waves.
The only difference is in the level of technology. Laser comms is
just a little more advanced than the spark gap radios of 100 years
ago. Other than that, at the most fundamental level, photons are
photons, no matter the wavelength.
<snip>
>I will have to take a little time to digest the probability math numbers.
>There are some differences in the way that I think of probabilites and what
>you must be trying to suggest. I was taught that the worst probability for
>an event is 0.5. If the probability of you being correct is less than 50
>percent, then you simply use the inverse which by definition will then have
>probability greater than 50 percent. That does not seem to relate to what
>you said. I accept that FEC works quite well. ( It seems a little more
>complicated than how a middle value selector works in a fly-by-wire flight
>control system, but I have worked with those for two decades. )
If you are unable to correctly determine if you have detected a
signal more than half of the time, it's clear that you cannot detect
the signal. For example, if the probability p that you can detect a
signal is 0.25, then the probability that you CAN'T detect the signal
is (1 - .25) or 0.75. In other words, you'll be wrong 3 times out of 4.
FEC does work very well when p is high. For example, if p = 0.99,
corresponding to a BER of 1.0% (which is usually taken as nearly
unusable), the triple redundant FEC model I described will have a
system p of p^2(3 - 2p) = (0.99^2)(3 - 2*0.99), or about 0.9997,
corresponding to a BER of 3.0E-4. In this case, FEC improves the BER
by a factor of about 34, roughly one and a half orders of magnitude.
It makes FEC look great, but only when the raw detector p is fairly large.
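The majority-vote arithmetic above can be sketched in a few lines of
Python (assuming, as the model does, independent errors on each of the
three copies of a bit):

```python
# Triple-redundancy (majority vote) FEC: each bit is sent three times
# and the receiver takes the majority. p is the probability that a
# single copy is detected correctly; errors are assumed independent.

def majority_vote_p(p: float) -> float:
    """System probability: all three copies right, or exactly two right."""
    return p**3 + 3 * p**2 * (1 - p)   # simplifies to p**2 * (3 - 2*p)

raw_p = 0.99                            # raw detector p (BER = 1%)
sys_p = majority_vote_p(raw_p)
print(f"system p   = {sys_p:.6f}")      # -> 0.999702
print(f"system BER = {1 - sys_p:.2e}")  # -> 2.98e-04
```

Note that when p drops below 0.5 the same formula makes things worse
(e.g. majority_vote_p(0.25) is about 0.156), which is exactly the
point about not being able to detect the signal at all.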
>Maybe what I am suggesting is that we try to connect an A to D converter of
>equivalent to sound card quality, or better, directly to a photodiode
>receiving an ON-OFF-Keyed data stream. Any amplification of the signal from
>the photodiode would need to be DC coupled and have no more noise than is
>introduced by a K3PGP preamp and sound card.
This is another way to do it, but the bandwidth and p criteria do not
change. You would want a low pass filter to limit noise that falls
outside your signal bandwidth. It may be impractical, though, to
filter noise below the low end of your signal bandwidth, so the
scheme would suffer a little from the extra receiver bandwidth. The
value of p for the K3PGP front end and sound card will be held
hostage at the low end of signal power by amplifier noise or
quantization noise, and by overload distortion at the high end. Your
scheme could certainly work, but what reason is there to think it
would work significantly better than 800 Hz OOK or any other scheme?
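As a minimal sketch of the low-pass idea, here is a single-pole IIR
filter (a software RC filter) applied to a sample stream. The 8 kHz
sample rate and 800 Hz cutoff are illustrative assumptions, not
values from the discussion:

```python
# Single-pole IIR low-pass filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).
# Passes signal inside the cutoff, attenuates noise above it.
import math

def lowpass(samples, fs, fc):
    """Filter `samples` taken at rate fs (Hz) with cutoff fc (Hz)."""
    rc = 1.0 / (2 * math.pi * fc)       # equivalent analog RC constant
    dt = 1.0 / fs
    alpha = dt / (rc + dt)              # smoothing factor, 0 < alpha < 1
    y, out = 0.0, []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

fs, fc = 8000, 800                      # assumed: 8 kHz sampling, 800 Hz cutoff
dc = lowpass([1.0] * 100, fs, fc)       # in-band (DC) signal passes (~1.0)
hf = lowpass([1.0, -1.0] * 50, fs, fc)  # 4 kHz "noise" is attenuated (~0.24)
```

As the text notes, a filter like this only removes noise above the
signal band; noise below the band still rides through.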
<snip>
>I also think that I was wrong to suggest that sampling of a square wave
>could be done with only two states.. I don't understand the number of bits
>needed to sample a signal, and the practical sampling rate seems to be
>about three times the highest frequency component.
The more bits in the sample, the greater the potential dynamic range
of the sample, or alternatively the less quantizing noise. As for
sampling rates, Nyquist says the sampling rate must be at least
twice the highest frequency component. Practical circuit
considerations and the need to avoid aliased signals usually dictate
a somewhat higher sampling frequency. For example, for a maximum
frequency response of 20 kHz on an audio CD, the standard sampling
rate is 44.1 kHz.
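Both points can be put in numbers, using the standard ideal-quantizer
rule of thumb (each bit buys about 6.02 dB of dynamic range, plus
1.76 dB for a full-scale sine) and the Nyquist lower bound:

```python
# Rough quantization and sampling figures for an ideal ADC.

def dynamic_range_db(bits: int) -> float:
    """Ideal SNR of an n-bit quantizer for a full-scale sine wave."""
    return 6.02 * bits + 1.76

def min_sample_rate(f_max: float) -> float:
    """Nyquist lower bound; practical designs sample somewhat faster."""
    return 2.0 * f_max

print(f"{dynamic_range_db(16):.1f} dB")       # 16-bit CD audio: ~98.1 dB
print(f"{min_sample_rate(20e3):.0f} Hz")      # 40000 Hz; CD actually uses 44100
```

This is why a 16-bit sound card has so much more usable dynamic range
than an 8-bit converter, and why the CD rate sits a bit above 2 x 20 kHz.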
>James
>N5GUI
73 de Glenn wb6w