[Laser] fundamentals

James Whitfield n5gui at cox.net
Mon Feb 12 18:29:11 EST 2007


The fundamental goal of optical communication is the detection of a stream of photons and the decoding of the information it contains.

So far the focus here has been on the conversion of the rate of flow of photons to a similar flow of electrical current which then drives one of two devices, depending on which type of signal processor is expected to decode the information.  For MCW or voice information, the device used is a speaker so that the human auditory system processes the signal.  For computer data and particularly signals too weak for auditory processing, the device is a computer with a sound card using signal processing routines to generate text or graphic output for the human visual system.

As much as I prefer working with the auditory versions, I am trying to understand the issues of the weak signal systems.  The process of encoding data involves producing tones which contain information in their frequency or phase. ( Could be amplitude, but that does not seem useful for weak signal work. )   This process seems to be largely modeled after modulation of information on radio carriers, but since we do not have acceptable means of controlling the phase or frequency of our transmit devices, we control the amplitude ( pulse or linear ) and impress a sub-carrier of audio tones, which we then can control in frequency, phase, or amplitude.
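To make the subcarrier idea concrete, here is a minimal sketch of on-off keying an audio tone onto the transmit amplitude. The 800 Hz tone, 10 baud rate, and 8000 Hz sample rate are illustrative values of my own, not anything established on the list:

```python
import math

def keyed_subcarrier(bits, tone_hz=800, baud=10, rate=8000):
    """On-off key an audio subcarrier: tone present for a 1, silence for a 0.
    tone_hz, baud, and rate are illustrative values, not from the text."""
    spb = rate // baud  # samples per bit
    out = []
    for b in bits:
        for i in range(spb):
            out.append(b * math.sin(2 * math.pi * tone_hz * i / rate))
        # note: the phase restarts at each bit boundary here; a real
        # modulator would keep the oscillator running continuously
    return out

wave = keyed_subcarrier([1, 0, 1])
print(len(wave))   # 2400 samples: 3 bits at 800 samples each
```

The same skeleton would carry FSK or PSK by varying the tone frequency or phase per bit instead of gating the amplitude.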

On the receive side, we use DSP and other tools to detect when the tone exists, or perhaps which phase it is in.  But the process of detecting a tone is in part detecting when the photon stream is present, or its value relative to a reference, or at least increasing versus decreasing. I am wondering how much information is needed to detect a tone, or determine its phase, compared to detecting the stream of photons that is "carrying" it.
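For what it is worth, the standard single-tone detector in DSP is the Goertzel algorithm, which answers exactly the "does this tone exist in this block of samples?" question. A minimal sketch, with a sample rate and test frequencies chosen purely for illustration:

```python
import math

def goertzel_power(samples, sample_rate, tone_hz):
    """Estimate the power of one tone in a block of samples (Goertzel)."""
    n = len(samples)
    k = round(n * tone_hz / sample_rate)   # nearest DFT bin to the tone
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# toy check: a 320 Hz tone sampled at 8000 Hz should show far more
# power at 320 Hz than at an unrelated frequency
rate = 8000
tone = [math.sin(2 * math.pi * 320 * i / rate) for i in range(800)]
p_on = goertzel_power(tone, rate, 320)
p_off = goertzel_power(tone, rate, 700)
print(p_on > 100 * p_off)
```

Detecting phase rather than just presence takes the same machinery, keeping the two internal state values instead of collapsing them to a power.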

For example, an MCW signal consists of the key down time, key up time, AND also all of the information about the tone carrier - phase, duty cycle, chirp, harmonic content...  From an information perspective, only the approximate key up and key down times are relevant.

Similarly, a PSK signal consists of the time each pulse ( or sine wave ) rises and the time when it declines, which when combined communicate the phase and frequency of the signal.  This information is repeated many times within each message element, which is part of the reason that on radio the system is very robust even when weak and in the presence of interference and fading.  Still, it certainly seems that far more information is contained in the signal than is needed.

FSK as an encoding method, certainly depends on frequency data, or at least frequency change data.  Radio systems use different forms to attack different problems, such as unstable frequency control elements on either the receive or transmit side, narrow banding to stack more users into limited space or deal with noise.  Optical systems should not have frequency stability problems or limits on available bandwidth.

On the other hand, these tone based systems provide redundancy and some level of confidence that the signal even exists.  They may not compare to forward error correction for those factors, but at the very least they provide us a way to use the techniques developed for radio communication on optical systems.

So, how much "information" has to be delivered to the receive device to process tones?  I am guessing that we can represent a square wave tone as two-valued samples, and that at least two samples are needed per cycle of the highest frequency component.  For a 320 Hz PSK signal, that translates to 640 bits per second, or 20 bits to distinguish between two states: a one and a zero.
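Spelling out that arithmetic; the 32 baud symbol rate below is my own assumption (roughly PSK31's 31.25 baud), inserted to get from 640 bits per second down to 20 bits per symbol:

```python
tone_hz = 320          # subcarrier frequency from the text
samples_per_cycle = 2  # Nyquist minimum for the highest component
bits_per_sample = 1    # square wave: each sample is just high or low

bit_rate = tone_hz * samples_per_cycle * bits_per_sample
print(bit_rate)        # 640 bits per second

# hypothetical symbol rate of 32 baud (close to PSK31's 31.25 baud)
symbol_rate = 32
print(bit_rate // symbol_rate)  # 20 bits to carry one two-state symbol
```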

For "Laser Scatter" by K0SM, there are 82 tones between 20 and 31 Hz that are sent for 10 seconds.  Does that represent 620 bits to distinguish which one of 82 states?

Can these be compared?  20 bits for two states versus 620 bits for 82 states?  Is it as simple as 10 versus 7.561?  
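Working the numbers both ways, as samples per state (the 10 versus 7.561 above) and as the information-theoretic measure, log base 2 of the state count:

```python
import math

# raw bits delivered versus number of states distinguished
mcw_bits, mcw_states = 20, 2
k0sm_bits, k0sm_states = 620, 82

# the ratios from the text: bits divided by number of states
print(mcw_bits / mcw_states)                  # 10.0
print(round(k0sm_bits / k0sm_states, 3))      # 7.561

# information actually conveyed per message element, in bits
print(math.log2(mcw_states))                  # 1.0
print(round(math.log2(k0sm_states), 3))       # 6.358
```

On the information-theoretic view, the comparison would be 20 bits delivered per 1 bit conveyed, versus 620 bits delivered per roughly 6.36 bits conveyed.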

If you tried to compare a BPSK encoded tone at 600 Hz to one at 250 Hz, would you get a theoretical improvement with the 250 Hz tone, but then not be able to realize it when you wind up decoding it with a sound card that has a 10 kHz front end?


To simplify further, can we focus on methods to detect "on" versus "off" states of the photon stream instead of extracting a tone impressed on the photon stream?  If we did have an OOK stream of bits, would we need to impress FEC to deal with weak signal issues?   How would that compare to encoding the data on tones?
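If the raw bit stream were available, the simplest possible FEC would be a repetition code: send each bit several times and decode by majority vote. A toy sketch, with made-up sample values standing in for a photodetector output:

```python
def decode_ook(samples, threshold):
    """Classify each photodetector sample as light-on (1) or light-off (0)."""
    return [1 if s > threshold else 0 for s in samples]

def majority_vote(bits, r=3):
    """Repetition-code FEC: each data bit is sent r times; decode by majority."""
    return [1 if 2 * sum(bits[i:i + r]) > r else 0
            for i in range(0, len(bits), r)]

# toy stream: two data bits, each sent three times, with one sample
# per bit pushed toward the wrong side by noise
noisy = decode_ook([0.9, 0.2, 0.8, 0.1, 0.7, 0.05], threshold=0.5)
print(noisy)                 # [1, 0, 1, 0, 1, 0]
print(majority_vote(noisy))  # [1, 0] -- both bits recovered despite errors
```

A repetition code is the weakest FEC there is, but it makes the trade visible: like a tone, it spends extra channel bits to buy confidence that the signal is really there.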

I suppose this would also require a different way of attaching a computer to the photon sensor.  At the moment the closest thing I can relate it to would be the sensors used in astronomy, but those are usually adapted to image processing.  It seems like a problem that requires more, and different, creativity than I have.



James
N5GUI


