[Elecraft] Re: SDR - K3 - and GUI Interface

Brian Lloyd brian-wb6rqn at lloyd.com
Wed May 23 22:22:04 EDT 2007


> We need to hone the definition of a SDR radio.

That doesn't seem that hard to me but I am probably using it more  
broadly than some. I think that most people want to use the  
definition where they pump broadband signals into an A:D and then do  
all the filtering and detection in software (DSP). This allows the  
receiver to be used at any frequency below the Nyquist frequency (1/2  
the sampling rate). The state-of-the-art right now is something like  
60 Msps at 16 bits of depth. That implies a radio that can cover all of  
the HF spectrum, i.e. 0-30 MHz, all at once and have a dynamic range  
of 96 dB.
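
To put rough numbers behind that, a quick back-of-the-envelope sketch in  
Python (my own scribble, not tied to any particular converter): Nyquist  
bandwidth is half the sample rate, and an ideal N-bit converter gives you  
roughly 6 dB per bit (6.02*N + 1.76 dB for a full-scale sine).

def nyquist_mhz(sample_rate_msps):
    """Highest frequency (MHz) representable without aliasing."""
    return sample_rate_msps / 2.0

def ideal_dynamic_range_db(bits):
    """Quantization-limited SNR of an ideal N-bit converter, full-scale sine."""
    return 6.02 * bits + 1.76

print(nyquist_mhz(60))              # 30.0 MHz -- all of HF at once
print(ideal_dynamic_range_db(16))   # ~98 dB (about 6 dB/bit; the 96 dB above)
print(ideal_dynamic_range_db(20))   # ~122 dB -- the hybrid-IF case below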

OTOH, I am willing to broaden this definition to include some analog  
bits, e.g. preamp, first mixer, first IF with roofing filters, second  
mixer, A:D at second IF of up to about 200 kHz with about 20 bits  
(120 dB) of dynamic range. All modulation and demodulation takes place  
in software following the A:D. There is no conventional demodulator,  
e.g. product detector.
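
If it helps to see what "demodulation in software, no conventional  
product detector" looks like, here is a minimal sketch of one way to do  
it (purely illustrative; the 48 kHz sample rate and 12 kHz BFO are my  
assumptions, not any rig's actual numbers): mix the digitized IF against  
a numerically generated BFO and low-pass filter the product to get audio.

import numpy as np
from scipy.signal import butter, lfilter

fs = 48_000        # sample rate of the digitized IF stream (assumed)
bfo_hz = 12_000    # software "BFO" at the assumed IF center

def software_product_detector(if_samples, fs, bfo_hz, audio_cutoff_hz=3000):
    """Mix real IF samples against a numeric BFO, then low-pass to audio."""
    t = np.arange(len(if_samples)) / fs
    mixed = if_samples * np.cos(2 * np.pi * bfo_hz * t)   # the "product"
    b, a = butter(4, audio_cutoff_hz / (fs / 2))          # 4th-order LPF
    return lfilter(b, a, mixed)

t = np.arange(fs) / fs
cw = np.cos(2 * np.pi * (bfo_hz + 700) * t)   # CW carrier 700 Hz above the BFO
audio = software_product_detector(cw, fs, bfo_hz)   # -> ~700 Hz audio tone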

OTOH, some might consider the second mixer to actually be a product  
detector and the second IF to be 'audio'. Since demodulation takes  
place in software and the software also generates some or all of the  
AGC signal used to reduce the gain of the analog components, I  
consider this "audio" to actually be IF. If the software then  
controls all of the other functions of the radio and every aspect of  
the analog functioning is also under the control of the software, I  
would say that this constitutes an SDR.
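
As an aside on the software-derived AGC: one common approach is an  
envelope follower with a fast attack and slow decay. The sketch below is  
mine and purely illustrative (the time constants are assumptions); the  
tracked level is what you would map onto the gain-control line that  
throttles the analog front end.

import numpy as np

def agc_level(if_samples, fs, attack_ms=2.0, decay_ms=500.0):
    """Track the signal envelope; the output would set front-end gain."""
    attack = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    decay = np.exp(-1.0 / (fs * decay_ms / 1000.0))
    level = 0.0
    out = np.empty(len(if_samples))
    for i, x in enumerate(np.abs(if_samples)):
        coeff = attack if x > level else decay   # fast attack, slow decay
        level = coeff * level + (1.0 - coeff) * x
        out[i] = level
    return out   # map this to the hardware gain-control DAC / attenuator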

I do recognize that this is a rather open interpretation but I also  
think that it is the best that we can do today at HF given the SOTA  
(sorry, state-of-the-art) of A:D, D:A, and DSP. I suspect that we  
will achieve greater flexibility in the not-too-distant future as A:D  
gets deeper (bit depth) and faster (sample rate).

> We need to hone the definition of User Interface

That one is dirt simple. It is how the person using the device  
controls the device. The knobs and switches on a standard radio are  
one form of user interface. The screen, keyboard, mouse, and software  
of the standard PC form another kind of user  
interface. The steering wheel, throttle, brake pedal, and dashboard  
of a car form yet another user interface.

The problems of user interface have to do with how easy it is to  
learn and how easy it is to use. These are NOT the same. The average  
"desktop" metaphor used with most computer systems is relatively easy  
to learn and a pain-in-the-butt to use because you end up having to  
make lots of monkey-motion to get even the simplest task  
accomplished. Contrast that with the command-line interface of Unix  
which is the polar opposite: daunting to learn but amazingly powerful  
and simple to use.

Here is another example I think almost everyone can relate to: the  
average 2M HT. Back in the days of the venerable Icom IC-2AT things  
were dirt-simple: dial in the frequency (precious little training  
needed there), set the offset, and key the mic. Then along came the  
microprocessor. Programmers discovered they could do all SORTS of  
clever things inside the HT, most of them useless to a person using  
the hand-held. This required them to come up with all sorts of arcane  
key sequences, menus, and 'soft keys' to control the device, all of  
which were done badly and painfully. Sure, you can spend the evening  
programming all the memories, but now I want you to hop in your car,  
drive out of your normal operating area, and just manually enter a  
new repeater as you drive along. Better still, manually scan for a  
local repeater and get on it. Painful, isn't it? Now imagine an  
IC-2AT that just adds one more thumbwheel -- CTCSS tone -- and  
imagine how it would be. Add one more feature, a read-out of received  
CTCSS tone, and you can probably get on any repeater you can find and  
hear.

The point is, the user interface on almost every device with a  
microprocessor SUCKS! Notable exceptions are those things with single- 
function switches and knobs.

Now think about your average hard-wired HF transceiver. Not a lot of  
differences. I can sit down at most HF transceivers and make them  
work ... up until they started adding microprocessors. HW101? KWM2?  
FT-101E? No problem. IC-706? Not without a manual in front of you, you  
won't. But add the switches on the front panel back in and it becomes  
easier. A toggle switch labeled "speech processor" and another  
labeled "VOX" are no-brainers, as is a rotary switch labeled "IF  
bandwidth 6.0k 2.7k 1.8k 800 500 200".

And then there is the concept of customization. Puh-leeze! Why do we  
need 49 different ways to do the same thing? All that accomplishes is  
to ensure that no one else can use your device.

Sorry. I got carried away.

Bottom line, single-function knobs and switches have a very nice  
quality. I can look at them and know what they do. Maybe we can move  
some (all?) of that into software and put it on a screen. If so, do  
not give me only 10 buttons and interminable menus. It may seem  
elegant at first but when you are trying to use it for real, it gets  
old quickly.

(I am saying I LIKE the UI on the K3. OTOH, I can see needing a way  
to add to it at some time. To this I *don't* have an answer. LEGO- 
block switches and knobs, anyone?)

> We need to look under the hood to see the technology and how that  
> impacts performance

That too.

73 de Brian, WB6RQN
Brian Lloyd - brian HYPHEN wb6rqn AT lloyd DOT com



