[Laser] fundamentals

Glenn Thomas glennt at charter.net
Thu Feb 15 16:59:51 EST 2007


Hi all.

At 07:55 PM 2/14/2007, James Whitfield wrote:
>Glenn,
>
>We do not seem to be converging on a solution.  I will concede that in your
>frame of reference "The only difference is...technology....Other than that,
>at the most fundamental level, photons are photons, no matter the
>wavelength."

I'm beginning to think that we're in "violent agreement"... ;-)

>That does not help me understand, or suggest improvement, on light
>communication systems to be built by amateurs.  From where I look at the
>situation, RF photons behave like waves, and have many off the shelf
>practical systems.  Optical photons act enough like particles that Newtonian
>optics are still useful.  There are differences between the behaviors of
>optical photons and RF photons.  I think we should be looking at those,
>still within the framework of fundamentals that you described much better
>than my previous attempts.  For me the real issue is building a working
>system, and maybe making it work better.

I think it does. Matching the bandwidth of the receiver to that of 
the transmitted signal is the first step in maximizing the SNR of the 
received signal. One approach is slow speed MCW with the laser 
chopped at 800 Hz, which provides the opportunity to use a very 
narrow bandwidth (25 Hz? slower/narrower?) filter centered at 800 Hz 
to reject as much out-of-band noise as possible. Using an optical 
filter to reject light that is not the same color as the laser 
transmitter is another example.
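
To put a number on the bandwidth argument, here's a trivial Python 
sketch (the 2500 Hz audio bandwidth and the 25 Hz and 2.5 Hz filters 
are just assumed, illustrative figures):

import math

def snr_gain_db(wide_bw_hz, narrow_bw_hz):
    # For white noise the in-band noise power scales with bandwidth,
    # so shrinking the filter buys SNR directly.
    return 10.0 * math.log10(wide_bw_hz / narrow_bw_hz)

print(snr_gain_db(2500.0, 25.0))   # ~20 dB better than a full audio passband
print(snr_gain_db(2500.0, 2.5))    # ~30 dB for a 2.5 Hz filter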

Improving the probability of detecting the signal also suggests ways 
to make an amateur system work better. There hasn't been much 
discussion here on the use of detectors with improved quantum 
efficiency for weak signal detection, but that's one approach. There 
also hasn't been much discussion of frequency/color selection with 
respect to atmospheric absorption lines, the physics of Rayleigh 
scattering, etc. It has always seemed strange to me that the NLOS 
laser experimenters seem to prefer IR when scattering is more 
effective at shorter wavelengths.
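
For what it's worth, the 1/wavelength^4 Rayleigh dependence is easy 
to put numbers on; the wavelength choices below are just examples:

# Relative Rayleigh scattering strength, normalized to 850 nm IR.
wavelengths_nm = {"blue 450 nm": 450.0, "green 532 nm": 532.0,
                  "red 650 nm": 650.0, "IR 850 nm": 850.0}
reference_nm = 850.0

for name, lam in wavelengths_nm.items():
    relative = (reference_nm / lam) ** 4
    print("%s: %.1fx the scattering of 850 nm" % (name, relative))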

>I enjoyed the comparison "Laser comms is just a little more advanced than
>the spark gap radios of 100 years ago."  It makes me wonder what the path of
>technology would have taken if the spark gap transmitter and galena crystal
>set era had DSP and FEC tools.  (Maybe there were steam powered data
>processors.)   I wonder if BPSK31 would have preceded voice signals?

Yeah. Einstein probably wasn't the only person who understood the 
basic mechanism that supports the photoelectric effect. It's a 
historical accident that lasers weren't discovered 100 years ago instead of 50.

>OK, enough silly stuff.  It took me a while to process the probabilities.  I
>think that you were assuming that the probability space was symmetrical, but
>I assumed bias that can be exploited.  For example, astronomers seldom look
>for stars in the daytime.  To take pictures of dim stars, they use big
>lenses, and they try to find them on Moonless nights.  That way something
>that looks like a smudge on the picture is more likely to be something real
>than if you took the picture in the daytime.  This poor example translates
>to communication by suggesting that you need to look for and take advantage
>of bias that helps you communicate.

Lessee... astronomers look at night simply because the SNR is better. 
Ditto the use of big lenses/mirrors. A better SNR increases the 
probability that the signal can be detected; the two are coupled, 
though I have to admit that I don't know what the quantitative relationship is.
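
For the idealized case of a simple on/off threshold detector in 
additive Gaussian noise, the textbook relationship is the Q-function 
of the threshold margin. A sketch of that idealization (not a model 
of any particular receiver discussed here):

import math

def q_function(x):
    # Tail probability of the standard normal distribution.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ook_error_probability(snr_db):
    # Bit error probability for on/off keying with a mid-level
    # threshold, where snr_db is the peak signal-to-noise ratio
    # (A^2 / sigma^2) in dB.
    snr = 10.0 ** (snr_db / 10.0)
    return q_function(math.sqrt(snr) / 2.0)

for snr_db in (6, 10, 14, 18):
    print(snr_db, "dB ->", ook_error_probability(snr_db))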

>As an example of bias, it would be easy to wire up a laser for simple Morse
>code.  If I thought that I needed to send the message at 10 minutes per word
>( that's right, I meant to say 0.1 words per minute ) I could not couple the
>output of a photo sensor to a computer sound card to process the signal.  On
>the other hand, I could send MCW at 300 Hz, even if I had to put an electric
>fan in front of the laser beam to chop it up.  The sound card is biased for
>"sound" input.  If I wanted to send 50 WPM, I might not even need the sound
>carrier, but I am sure the sound spectrum would be odd.

Well, yes. The 0.1 WPM CW would allow a much narrower bandpass filter 
to be used than for 10 WPM, again resulting in an improved SNR. Of 
course, the 0.1 WPM could be modulated onto the 300 Hz signal and the 
narrow bandpass filtering done digitally. It's not clear to me that 
there is any fundamental advantage to either system, though a 
practical bandpass filter may be easier to build at 300 Hz than at DC.
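
As an example of doing that narrow filtering digitally, the Goertzel 
algorithm measures the energy near a single tone. The 300 Hz 
subcarrier and 8 kHz sound card sample rate below are assumed 
numbers, and the block length sets the effective bandwidth (8000 
samples is roughly a 1 Hz bin):

import math

def goertzel_power(samples, sample_rate_hz, tone_hz):
    # Energy of the input in the DFT bin centered on tone_hz.
    n = len(samples)
    k = int(0.5 + n * tone_hz / sample_rate_hz)
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = 0.0
    s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2 = s_prev
        s_prev = s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# e.g. goertzel_power(one_second_of_samples, 8000.0, 300.0), where
# one_second_of_samples is a hypothetical buffer from the sound card.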

>Similarly I could setup a BPSK system.  The sound card might not care if I
>choose a 300 Hz tone or an 1800 Hz tone as my audio carrier.  But if I use a
>K3PGP front end, it certainly would make a difference to the overall system.
>It might even suggest that I move the tone down to 100 Hz or less

Of course, the usual data rate on BPSK is a LOT faster than 0.1 WPM 
CW and so will require a wider receiver bandwidth, which in turn will 
pass more noise power to the detector, which will require a stronger 
signal to maintain the same SNR. On the other hand, BPSK at the 
appropriate data rate to utilize the same receiver bandwidth as the 
0.1 WPM CW signal may be able to do a little bit better than CW.
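
A rough back-of-the-envelope for that point, with assumed (not 
measured) signalling bandwidths: the receiver bandwidth scales with 
the symbol rate, and so does the noise power it passes.

import math

# Approximate signalling bandwidths in Hz, for illustration only.
rates_hz = {"0.1 WPM CW": 0.1, "10 WPM CW": 10.0, "PSK31": 31.25}
reference_hz = rates_hz["0.1 WPM CW"]

for name, bw in rates_hz.items():
    penalty_db = 10.0 * math.log10(bw / reference_hz)
    print("%s: about %.0f dB more noise power than 0.1 WPM CW"
          % (name, penalty_db))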

On the other hand, BPSK of the laser itself is not practical at this 
time because the phase of the laser light is difficult to control. In 
fact, while laser light may well start out life looking coherent, it 
doesn't get very far from the laser before it's not at all coherent. 
There are several reasons for this. In this context, the Doppler spread 
of the laser carrier imparted by the thermal motion of the molecules 
in the laser medium will cause any phase information to be smeared 
into non-existence within a very small number of meters after exiting 
the laser.
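
A quick sanity check on that: the coherence length goes roughly as c 
divided by the linewidth. The 1.5 GHz figure below is just an 
assumed, typical Doppler width for a HeNe gain line:

SPEED_OF_LIGHT = 2.998e8   # m/s

def coherence_length_m(linewidth_hz):
    # Rough coherence length for a source of the given spectral width.
    return SPEED_OF_LIGHT / linewidth_hz

print(coherence_length_m(1.5e9))   # ~0.2 m for a Doppler-broadened line
print(coherence_length_m(100e6))   # ~3 m for a narrower 100 MHz line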

>I do not think that we have adequately identified the biases in the systems
>that we are using.  There should be alternatives that may work better or be
>easier to use if we look into the shortcomings of the systems.  At least there
>should be some rationale to support standard practices.

I agree here, though "biases" seems an unusual word to use. The 
quantum efficiency of the detectors we use and environmental sources 
of noise and loss would seem to be fertile areas for work. For NLOS 
comms, the physics of atmospheric scattering would seem a good place 
to start looking for an advantage. For example, it's foolish to try 
to do laser comms on the same wavelength as one of the CO2 lines!

>James
>N5GUI
>
>
>The following is off the topic of the post:
>
>Some time we can discuss voting schemes.  Your example  is that by voting
>you reduce the probability of getting the correct answer.  I could not
>follow that logic.  If it were true, there should be a lot of airplanes
>falling out of the sky.

I think you've slightly misquoted me here. I said that voting would 
reduce the probability of getting the correct answer IF THE RAW 
PROBABILITY OF GETTING A CORRECT DETECTION IS LESS THAN .5. If the 
raw probability happens to be greater than .5, then FEC can and does work.

>I also did not follow your comments about forward error correction.  It
>seemed that you were suggesting that it is just redundancy ( repeating the
>message).  I do not claim to understand it, but I would describe it as
>mapping the message ( assumed to be N bits of information ) into a larger
>space in such a way that the information may be fully recovered even if many
>of the bits in the larger space get corrupted as it is transferred from the
>sender to the receiver.

Redundancy is all you really have to work with in the presence of 
noise. Redundancy can be fractional; it doesn't have to be in terms 
of whole bits. The scheme I adopted (voting best 2 out of 3) is a bit 
simplistic, but at least it is amenable to fairly simple analysis. 
Mapping data into a larger Hamming space, as is done in various EDAC 
systems used in computer memories and communications, also uses 
redundancy, albeit less.

The typical EDAC system used with computer memory will correct a 
single bit error per word and detect a double bit error. You can buy 
this on a chip, see 
http://www.atmel.com/dyn/resources/prod_documents/29c516e.pdf for an 
example. The cost of this facility is that you must also store what 
are called "syndrome bits". For a 16 bit word (in fact up to 20 
bits), you must also store an addition 6 bits. Mapping into a larger 
space means that you must include more bits to unambiguously describe a point.
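
The 6 extra bits fall out of the usual Hamming bound. A small sketch 
of that count (the textbook bound, not the internals of the Atmel 
part above):

def sec_ded_check_bits(data_bits):
    # A single-error-correcting Hamming code needs r check bits with
    # 2**r >= data_bits + r + 1; double-error detection costs one
    # more overall parity bit.
    r = 1
    while 2 ** r < data_bits + r + 1:
        r += 1
    return r + 1

for m in (8, 16, 32, 64):
    print("%d data bits -> %d check bits" % (m, sec_ded_check_bits(m)))
# 16 data bits -> 6 check bits, i.e. a 22 bit stored word.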

The voting system (call it VS) I described also maps a single bit 
data point into a larger Hamming space. VS space is three bits for 
each data bit, while typical EDAC has 22 bits for 16 bits of data. 
Both systems can be fooled by sufficient noise. Both systems will 
correct a single bit error. VS will not detect a double bit error 
while the EDAC system will at least detect it. Of course, the VS 
system requires 2 out of 3 bits (66% error) to be bad to produce a 
double bit error, while the EDAC system gets one with only 2 bad bits 
out of 22 (9% error).

With three bits of error, both systems fail. VS provides a single 
data bit and it is wrong. EDAC, with three errors out of 22, lands on 
a different point in its Hamming space and returns the data 
associated with a point other than the sender intended!

As you pointed out, EDAC systems can be devised that perform better 
than either of these, including multiple bit corrections. However, 
these require a larger Hamming space, which in turn requires more 
bits to uniquely define each point, which increases the amount of 
redundancy required for the system.
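
For a concrete feel for the difference, here's a sketch comparing the 
two under an assumed channel with independent bit errors at rate p. 
Note the throughput isn't equal: VS carries 1 data bit per 3 sent, 
while the EDAC word carries 16 per 22.

from math import comb

def p_vs_wrong(p):
    # Probability the 2-out-of-3 vote delivers a wrong bit.
    return 3.0 * p ** 2 * (1.0 - p) + p ** 3

def p_edac_overwhelmed(p, word_bits=22):
    # Probability a SEC-DED word sees 2 or more raw errors, i.e. more
    # than it can correct.
    p_zero = (1.0 - p) ** word_bits
    p_one = comb(word_bits, 1) * p * (1.0 - p) ** (word_bits - 1)
    return 1.0 - p_zero - p_one

for ber in (1e-3, 1e-2, 1e-1):
    print("BER %g: VS wrong %.2e, EDAC sees >= 2 errors %.2e"
          % (ber, p_vs_wrong(ber), p_edac_overwhelmed(ber)))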

>One more comment.  You used the term quantization noise in regard to
>digitizing a sample.  I follow the idea of quantization noise if you are
>trying to digitize a smooth function like a sine wave.  A while back I was
>trying to imagine the wave form composed of 64 different frequency square
>waves each having the value 0 or 1.  They would form a complex wave but their
>sum could only have 65 discrete values ( 0 to 64 ).  The representation of
>that wave form would be exact, so I am thinking that there is no quantization
>noise.  Have you run across anything that would clarify that?

I think so. How precisely you generate the transmitted signal is 
irrelevant because it will be corrupted by the ever-present channel 
noise. Thinking you can pick out which level you started with is 
really the same process as deciding whether you've detected the 
signal or not; the only difference is that with 65 levels, the noise 
doesn't have to corrupt it as much for you to make the wrong choice. Even if 
you're digitizing a square wave, some of your samples will include 
the 1-to-0 or 0-to-1 transition, and that constitutes quantization 
noise. Even if you claim to be sampling at the transition times, you 
need the information that tells you when those transition times are 
and channel noise will corrupt that information as well.
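
Here's a small sketch of the transition point: if each sample 
effectively averages the waveform over a finite aperture (as a real 
sampler does), the samples whose aperture straddles an edge come out 
at in-between values, so the digitized record isn't an exact copy of 
the ideal square wave. The 300 Hz square wave and 8 kHz sample rate 
are arbitrary illustration values.

def square(t, freq_hz):
    # Ideal 0/1 square wave.
    return 1.0 if (t * freq_hz) % 1.0 < 0.5 else 0.0

def aperture_sample(t0, aperture_s, freq_hz, steps=100):
    # Average the waveform over the sample aperture starting at t0.
    return sum(square(t0 + i * aperture_s / steps, freq_hz)
               for i in range(steps)) / steps

sample_rate_hz = 8000.0
aperture_s = 1.0 / sample_rate_hz
samples = [aperture_sample(n / sample_rate_hz, aperture_s, 300.0)
           for n in range(40)]
in_between = [s for s in samples if 0.0 < s < 1.0]
print("%d of %d samples straddled a transition"
      % (len(in_between), len(samples)))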


73 de Glenn Thomas WB6W





