Hi again everyone
I thought it might be helpful to write a short description for anyone wishing to gain a better understanding of the role of the all-important 'Timing Constants' around which the QLAN NET driver is designed. I plan a fuller description in the second article in the 'Networking the QL' series (haven't read Part 1 yet? See earlier in this thread.)
All-in, there are 16 such Timing Constants (TCs), used in different situations or during the various phases of sending or receiving a packet under the Sinclair Network Standard protocol. In the earlier NET driver versions included in QDOS and Minerva, these timing constants are effectively hard-coded into the software timing-loops that introduce the required time-delays.
The NET TCs are word-sized values used (mostly) as the count for a DBRA loop to 'waste' CPU cycles at suitable points in the code. Note that they are not directly linear in terms of the effective time-delay they introduce, because the loops they appear in vary according to what else is executed within the loop body. Thus the absolute value of one TC does not necessarily relate to that of any other but, as you would expect, the larger the TC value, the longer the introduced delay.
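To make that concrete, here is a minimal sketch of such a timing-loop (my own labels and register choice - not lifted from the actual driver sources):

  ; d1.w = the Timing Constant, fetched from the driver's data.
  ; Each pass of the empty loop burns a fixed number of CPU cycles,
  ; so the total delay grows with the TC value.
        move.w  tc_value,d1     ; load the word-sized TC (tc_value is a placeholder)
  delay_loop:
        dbra    d1,delay_loop   ; decrement and branch until d1 hits -1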
In later versions of TK2 (perhaps it was always thus?) - which include a complete rewrite of the NET driver - these TCs are helpfully exposed and stored in an extended driver 'Physical' Definition Block in RAM, where they can be usefully reviewed and manipulated should the need arise. The 'nettime_bas' program from the SGC ROM source does precisely this, first identifying the address of the (active) definition block for the NET driver and then extracting/modifying the relevant word-sized values at known offsets.
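In outline, that amounts to no more than a word-sized peek/poke at the right offset into the block - something like this sketch, where the offset value is purely hypothetical (the real offsets depend on the TK2 version, which is exactly why nettime_bas locates the live definition block first):

  ndt_rdly equ    $42              ; HYPOTHETICAL offset of one TC within the block
  ; a0 -> base of the NET driver's physical definition block,
  ; located by walking the driver linked-list (as nettime_bas does)
        move.w  ndt_rdly(a0),d0    ; read the current TC value
        addq.w  #1,d0              ; nudge it up a touch...
        move.w  d0,ndt_rdly(a0)    ; ...and write it back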
In some versions of the TK2 NET driver, some of the crude DBRA timing-loops are replaced with clever use of m68k instructions that can be made to take longer or shorter to execute programmatically - e.g. ROR and similar, where the value in a Data register specifies how many bit positions to rotate the operand. Such techniques typically allow more fine-grained control, at the cost of overall range - i.e. they are only useful for very small delays.
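The trick works because, on the 68000/68008, register-count shifts and rotates cost a couple of extra clock cycles per bit position rotated, so the count held in the data register becomes a tiny, programmable delay. A sketch (register usage assumed):

  ; 68000 rotates take 6+2n cycles (word form), where n = count in the
  ; register, so d2 acts as a fine-grained delay value. The count is
  ; taken modulo 64, hence only very short delays are possible this way.
        move.w  tc_fine,d2      ; small TC: bit positions to rotate (tc_fine is a placeholder)
        ror.w   d2,d0           ; d0 is just a scratch register here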
NB. The redesign of the NET driver in TK2 means that the actual TC values used are very different from the hard-coded values in the original QDOS driver, but they achieve the same thing.
[Aside: Given the availability of a high-precision hardware Counter/Timer in the Q68, it was possible to re-design the NET driver for the Q68 (ND-Q68) around the HW Timer and dispense with the nasty software timing-loops altogether... Nonetheless, TCs are used in ND-Q68 in precisely the same way as on native QL hardware/drivers, except that there the TCs are direct reflections of the time-delay they introduce (and are long-sized, instead of word-sized...)]
Fortunately, of the 16 TCs, only three are especially critical and typically it is only one or more of these that may need adjustment to get two otherwise recalcitrant NET stations to communicate (more) reliably. I'll just discuss these three TCs here, but bear in mind that all 16 TCs play important roles within the Sinclair protocol definition, especially in relation to 'timeouts.'
A quick revision of the format of the serial data down the wire may help here: within both the NET Header and the Packet/block that follows it, each 8-bit byte occupies a 14(ish)-bit 'frame' composed of a couple of active LEAD bits, one inactive START bit and eight DATA bits, followed by one or more active STOP bits. The 13th & 14th bits are really just the time it takes to loop round again whilst the NET output remains active. The Sinclair standard defines the bit-time as 11.2 microseconds.
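Laid end-to-end on the wire, one byte-frame therefore looks something like this (time runs left to right, ~11.2 microseconds per bit):

  LEAD LEAD | START    | D0 D1 D2 D3 D4 D5 D6 D7 | STOP...
  (active)  (inactive)   (DATA, LSBit first)       (active)

As a back-of-envelope check: 14 bits x 11.2us comes to roughly 157 microseconds per byte, i.e. a little over 6 Kbytes/second of raw, best-case throughput before any protocol overhead.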
As will be familiar to any UART programmers out there, the most important aspect in receiving asynchronous serial data is in the detection of the (falling) edge of the START bit. For this, we are entirely reliant on the CPU cycle-time and the design of the driver code to detect the START bit as rapidly as possible - we don't have any hardware to do this for us.
Once the edge is detected (which may be a few microseconds after it actually occurs), we can start 'counting time' - and here the first of the three critical TCs comes in:
1. "ndt_rdly" (Read DeLaY) times the delay between detection of START and the mid-way point in to the first DATA bit - Bit-0 as it happens (LSBit first...). This may be 1.5x the usual bit-time, or else a 1/2 bit time to be followed immediately by a full bit-time, depending upon driver 'vintage' - the effect is the same. We thus 'skip' over the START bit altogether once detected.
2."ndt_rbit" (Read BIT) is the second important TC and is used to time the delay between each subsequent Bit - 2 through 7 (and in later versions of Minerva, which go-on to check the presence of the first STOP bit.)
On the sending side, only one TC is defined and used across the whole byte-frame:
3. "ndt_send" defines the timing for each bit - including the START, DATA and STOP bits, which are effectively combined in to a 12-14 bit word. Whilst its effect on the delay between bits equals that of ndt_rbit, ndt_send may or may not equal the same, absolute value as ndt_rbit - again, depends on the design of the various driver versions.
In practice (or 'In my experience'), once you have two stations almost talking to each other, it is only "ndt_rdly" that needs further refinement to get reliable comms.
Finally, don't forget that, given the 'handshaking' used in the protocol, successful sending requires not only your send-timings but also your read-timings to be aligned with those of the peer station - and vice-versa.