Extended expansion connector...

Nagging hardware related question? Post here!
Nasta
Gold Card
Posts: 444
Joined: Sun Feb 12, 2012 2:02 am
Location: Zapresic, Croatia

Re: Extended expansion connector...

Post by Nasta »

Peter wrote: Nasta: The HF return path length is not what matters for the inductance, but the area encircled by it (and the signal line).
Exactly what I said. The reference to length was re the original motherboard that has no ground plane.
If both PCBs have decent ground planes (and I don't see why there would be a large hole in those planes), even a wide connector with a far GND pin opens only a small area, as long as it is not deep. For QL style signals I would not consider the connector deep.
ISA (16-bit) has a few more ground lines interspersed, and the empty area is actually a bit smaller due to vertical connectors being used on a motherboard with 4 layers and decent buffering, not like the QL. Still, it legally runs at about 2x the speed at most, and that only when the 16-bit extension is used - so it's not a huge improvement.

Anyway, the area, assuming you use SP0..3 as ground, is about 21 pins long, but it's also wide enough not to be simply dismissed. Two right-angle connectors are used, which makes every mating pair of pins a small half-turn inductor and also lengthens the horizontal path through the connectors. Although the ground plane might well wrap around the connector pins, it could just as well end right where row c would be on both sides of the mating pair. The width comes to around 2.5cm or so. Not too terrible if you look at a single signal in isolation, but because signals are central and grounds (and I am including Vcc if it's properly decoupled) are on the outside, most of the loops coincide, especially for the signals in the middle, and they interact to a falling degree with the signals nearer the edges of the connector. Things are slightly worse if SP0..3 are not used as grounds, extending the length of the cut-out by 1cm.

Even that is not much of a problem assuming regular QL signals - but here we are talking about extending the connector and running it potentially much faster, and I'm not talking higher clock rates. Today's components, even if run at QL clock rates, are faster, and not by trivial amounts, so in many cases the bus was already run faster. The edge rates are the problem: even the most humble HC chips and GALs will trample old-style TTL in this respect. Not only are their edges faster under normal conditions (5V, room temperature - more likely since they do not generate lots of heat), their output impedance is also lower, so even less matched to the unpredictable characteristic impedances of the lines.
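To put rough numbers on the loop problem, here is a back-of-the-envelope ground-bounce sketch (V = L·dI/dt). The inductance, current swing and edge time below are illustrative assumptions of mine, not measured values for the QL connector:

```python
# Ground bounce across a shared return path: V = L * dI/dt.
# All component values are assumed for illustration only.

def ground_bounce(loop_nH, delta_mA, edge_ns, n_signals):
    """Peak bounce voltage when n_signals switch simultaneously
    through one shared return inductance of loop_nH nanohenries."""
    di_dt = (delta_mA * 1e-3) / (edge_ns * 1e-9)   # A/s
    return n_signals * (loop_nH * 1e-9) * di_dt    # volts

# Assumed: ~15 nH for a mating right-angle pin pair plus the plane
# cut-out, 30 mA swing per line, 3 ns HC-class edges, 8 lines at once.
print(f"{ground_bounce(15, 30, 3, 8):.2f} V")   # 1.20 V
```

Even with these mild assumptions, simultaneous switching eats a large chunk of a TTL noise margin, which is why the signals in the middle of the connector are the worrying ones.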

The example I made was to show that even if everything was done ideally given the current layout (i.e. proper ground planes on both ends) there is an underlying fundamental problem that limits what you can do - and in realistic cases it can get worse.
The point being, IF a 3rd row is added for any reason, it must address this problem, especially if it adds signals - more loops, more interaction - and with a 3rd row, an even bigger opening in the ground plane. Reasoning out a multiplexed protocol extension was done to show that a non-multiplexed one BARELY gains you a few extra ground pins - and it's questionable whether that would run reliably at anything more than the present speed, though with a wider bus.

To conclude, the current layout can be expanded in a useful manner even if only a few lines are used differently. Perhaps only one ground pin can be added (at the location of VPAL - better one than none), and its use would be optional: if a peripheral is designed to also run on the regular QL, it should leave this pin unconnected. 2-row peripherals specifically made for this new J1 would put a ground plane connection on VPAL.

It is important to note that this implementation would cater for regular-style QL hardware running at QL speeds or slightly higher, and this assumes producing the signals with slow hardware, or conditioning them to appear that way if produced by quick hardware. Clearly this is not always possible any more, even with legacy boards. Replace an old EPROM with a CMOS one and it's already a different story.
For more speed (and especially if a wide bus is included in the deal) - and note, I am talking edge, not clock rates here - proper ground plane extension through the connector must include more than just one extra ground pin. Aside from replacing the connector entirely or changing the layout completely, a third row is the obvious solution, and would be required to run wide or fast hardware optimally.

The simplest proposal is a third row with ground pins - from direct experience, one ground pin in every 6-pin, 3-row by 2-column group will do the trick to well in excess of 100MHz for straight connectors, perhaps somewhat lower for right angle (keep in mind a mating pair of right-angle connectors presents half a turn of an inductor for every single pin!). However, this only gives us 16 usable extra pins. Even going to one ground per 3x3 group gives us 21 pins for signals, still not sufficient. So, that's why I thought it a good idea to do a multiplexed option, which also uses fewer pins, hence less power and fewer problems.
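The pin arithmetic can be sketched explicitly. This assumes a 32-column, 3-row DIN 41612-style connector (the column count is my assumption; the post only gives the resulting totals):

```python
# A third row on a 32-column connector adds 32 pins (assumed geometry).
cols = 32
extra_pins = cols            # one extra pin per column

# One ground per 6-pin (3 rows x 2 columns) group: a ground every 2 columns.
grounds_3x2 = cols // 2                      # 16 grounds
signals_3x2 = extra_pins - grounds_3x2       # 16 usable signal pins

# One ground per 3x3 group: a ground every 3 columns, rounded up.
grounds_3x3 = -(-cols // 3)                  # ceil(32 / 3) = 11 grounds
signals_3x3 = extra_pins - grounds_3x3       # 21 usable signal pins

print(signals_3x2, signals_3x3)   # 16 21
```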

The ideal 3-row version would be ground on the middle row, but since that's not compatible, we go for ground in either the top or bottom row. In theory, the bottom row has some advantages, as the orientation notches align between 2- and 3-row connectors that way, but it's mechanically problematic: the 2-row board does not align in height, so it's a real temptation to plug it into the wrong 2 rows. At one point I thought up a sort of combination of right-angle and straight-pin 3-row with short straight pins in the bottom row, so a PCB would be wedged between the mid and bottom rows, but this is really tricky to make and solder, and then the bottom layer of the PCB would have to be routed through vias to a mid-layer ground plane. So, it went back to the top row being the extra one. This is still easy to extend onto a backplane.

Running a mixed bag of new and old peripherals on such a system will of course result in slower attainable speeds (which is one reason why there were provisions to have the speed chosen by the user!) but the idea is to eventually replace the old with the new (perhaps way too optimistic but at least that's the theory).

The purpose of full backwards compatibility would actually be to enable development using existing peripherals initially, but with the idea to eventually replace them with new ones. These could perhaps run faster on this extended bus but that would largely be a side effect. Also very simple peripherals (short lines from J1 to the hardware on the board) could still run reliably with a regular 2-row, and such could be designed to also work on regular QLs.

All that being said, given what is planned to be included on such a board, there is simply no sense in connecting some legacy peripherals to it at all, and consequently no sense in attempting to produce a new bus specification which is also 100.000% compatible with the old (nevermind that the old is not 100% compatible with the old :P ). For instance, there would certainly be no sense in connecting a GC or SGC, designed to replace the on-board CPU and add RAM, to a system that already has a faster CPU and more RAM. Ditto for extra RAM on an 8-bit bus, maxing out at a total of 896k, added to 16 megs or more of RAM that also happens to run 20 times faster. However, it does make sense to add simple 8-bit peripherals, including their on-board ROMs, at least initially. One could argue that adding a floppy controller, parallel port, mouse etc. on board is sensible, but take a look at where computing is going - no floppies (indeed no hard drives - flash-based media instead), no parallel ports, no mouse and keyboard; instead there is USB. So, it stands to reason that an 'IO pack' board on J1 will eventually get replaced by a more modern one, while the CPU and RAM core will not.



Re: Extended expansion connector...

Post by Nasta »

Oh, and another bit I forgot about.

BERRL was never used, it's only pulled up on the motherboard. In fact it cannot be used since no version of the OS actually provides a bus error exception routine, so pulling this low instead of DTACKL will either crash the machine or do nothing... although I would not be surprised if Minerva does something clever, knowing Lau Reeves...
Hence, the logical place to put a +5V power line (and just pull the existing signal up a bit harder :P ) is the pin formerly used for BERRL.


User avatar
Peter
Font of All Knowledge
Posts: 2004
Joined: Sat Jan 22, 2011 8:47 am

Re: Extended expansion connector...

Post by Peter »

Nasta wrote:ISA (16-bit) has a few more ground lines interspersed, and the empty area is actually a bit smaller due to vertical connectors being used on a motherboard with 4 layers and decent buffering, not like the QL. Still, it legally runs at about 2x the speed at most, and that only when the 16-bit extension is used - so it's not a huge improvement.
I didn't mean ISA as improvement, just as example to illustrate the high frequency return path should not be that critical for a QL style interconnect which has decent ground planes on both PCBs. Practically, the ISA bus did open much larger "inductor" areas, because it was used with terrible slot risers.
Nasta wrote:Today's components, even if run at QL clock rates, are faster, and not by trivial amounts, so in many cases the bus was already run faster.
I can only underline this, especially as it's not a synchronous bus. With newest-generation logic chips, the rise/fall times can easily be 30...50 times faster, producing much more crosstalk and over-/undershoot. I assume that Dave or whoever designs a new motherboard with the QL+ connector would either use low-slew-rate chips anyway, or decouple with buffers. (By the way, using the Q68 FPGA with the QL bus would require a grave full of low-slew-rate buffers, which takes away some of the charm of such an idea.)

On the other hand, if slew rate is kept in mind, I wouldn't see the connector pinout as critical as Nasta does. The fastest CPU I saw in recent discussions was the 68020, and usage of the extra bus features would require new board designs on both sides of the connector anyway. HC-something may in some cases still be critical with old QL mainboards, but offers a sufficiently low slew rate in our case.
Nasta wrote:Hence, the logical place to put a +5V power line (and just pull the existing signal up a bit harder :P ) is the pin formerly used for BERRL.
BERRL could be used by a GoldCard-side CPU board to generate a double bus fault in order to disable the QL-mainboard-side CPU. Now if such a board is plugged in... ;)

(All I say about incompatibilities is not meant as really important. A new QL mainboard replacement in series production is not the most likely thing to happen, so it should have all possible freedoms to happen at all.)

What currently interests me more, is the existing QL mainboard replacement (Aurora) :mrgreen: I'm curious wether it cures the SGC related instabilities with QL-SD. ;) Nasta, do you still use an Aurora system with SGC? Or anybody else around here?



Re: Extended expansion connector...

Post by Nasta »

Peter wrote:
Nasta wrote:ISA (16-bit) has a few more ground lines interspersed... used on a motherboard with 4 layers and decent buffering
I didn't mean ISA as improvement, just as example to illustrate the high frequency return path should not be that critical for a QL style interconnect which has decent ground planes on both PCBs. Practically, the ISA bus did open much larger "inductor" areas, because it was used with terrible slot risers.
Well, I have shortened my original sentence in the quote above since that's quite relevant to the case in point, but I do understand and agree with you. With ISA at least the buffering was relatively well defined, while on the QL it's either non-existent or a rather haphazard combination of all sorts of things. I'll get back to this a bit later on regarding SGC/Aurora.
Nasta wrote:Today's components, even if run at QL clock rates, are faster, and not by trivial amounts, so in many cases the bus was already run faster.
I can only underline this, especially as it's not a synchronous bus. With newest-generation logic chips, the rise/fall times can easily be 30...50 times faster, producing much more crosstalk and over-/undershoot. I assume that Dave or whoever designs a new motherboard with the QL+ connector would either use low-slew-rate chips anyway, or decouple with buffers. (By the way, using the Q68 FPGA with the QL bus would require a grave full of low-slew-rate buffers, which takes away some of the charm of such an idea.)

Well, either that or a whole lot of RC termination - or bus clamps (like PCI). In any case, today's components easily produce edge rates that could sustain 100MHz clocks. This limits the maximum length of lines to (theoretically) ~30cm, but under ideal conditions. In real life 10cm is more like it, and that's with a good unbroken ground plane (and a non-resonant one - a whole different can of worms). In other words, making untreated signals drive bus lines under these conditions falls somewhere between asking for trouble, luck and wishful thinking :)
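The ~30cm vs ~10cm figures follow from the usual "electrically short line" rules of thumb, sketched here. The 65 ps/cm FR4 delay and the 3 ns edge time are my assumed values, not taken from the thread:

```python
# How long can an unterminated line be before reflections matter?
# Criterion: one-way propagation delay below t_rise / factor.
T_PD_PER_CM = 65e-12   # FR4 outer-layer delay, ~65 ps per cm (assumed)

def max_len_cm(t_rise_ns, factor):
    """Longest line (cm) whose one-way delay stays under t_rise / factor;
    factor=2 is a lenient criterion, factor=6 a conservative one."""
    return (t_rise_ns * 1e-9) / (factor * T_PD_PER_CM)

# With assumed 3 ns edges (HC-class logic at 5 V):
print(f"lenient:      {max_len_cm(3, 2):.0f} cm")   # ~23 cm, the 'theoretical ~30cm' ballpark
print(f"conservative: {max_len_cm(3, 6):.0f} cm")   # ~8 cm, close to '10cm in real life'
```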
The thing is, any bus one cares to implement (especially assuming it could support (some) old-style boards) must be isolated from the CPU bus for reasons of speed, so buffering is a must. The real work comes down to choosing the right type of chip and (if needed) further conditioning the signal.
Peter wrote: On the other hand, if slew rate is kept in mind, I wouldn't see the connector pinout as critical as Nasta does. The fastest CPU I saw in recent discussions was the 68020, and usage of the extra bus features would require new board designs on both sides of the connector anyway. HC-something may in some cases still be critical with old QL mainboards, but offers a sufficiently low slew rate in our case.
HC is fine (or rather HCT), but something like LBT/ABT equivalents, if available, may be better. In any case, regular HCT can be made to work fine with some series resistors, and perhaps small capacitors on a few critical lines. Passing 68020 signals at 25-33MHz onto the bus may still require serious consideration of the connector pinout. As an example, one can use PCI, which is originally 33MHz. Although a 68020 never produces a signal at 33MHz, it does use the very same edge rates, and even though it's not capable of transferring data on each clock, when the relevant 33MHz clock cycle occurs, roughly the same parameters apply as for PCI - the difference being that PCI is actually easier to work with under the same general electrical conditions because it's synchronous.
That being said, I do not see a reason to put said signals on the bus (*) which certainly relaxes the requirements. And THAT being said, if there is a way to improve signal integrity, then it should be done anyway, assuming of course nothing radical is required.

(*) Things that truly require speed (which is the paramount reason why one would want a wide and fast bus) are rather few, and mostly already need to be directly coupled to the CPU, hence unlikely to operate optimally on a bus unless there is no other way (i.e. better to add something, have it work at half speed and have it be feasible to make, than to require a complete redesign of the system from scratch, making it 3 times more costly in time and money - in which case it will most probably never happen). However, there are a number of relatively simple devices that require up to moderate speed, which could benefit from a faster if not wider bus.
Nasta wrote:Hence, the logical place to put a +5V power line (and just pull the existing signal up a bit harder :P ) is the pin formerly used for BERRL.
BERRL could be used by a GoldCard-side CPU board to generate a double bus fault in order to disable the QL-mainboard-side CPU. Now if such a board is plugged in... ;)
Actually no, because /HALT and /RESET are tied together on the motherboard. Pulling BERRL low will lock the CPU into a loop of two consecutive cycles (two are needed to produce two bus faults, i.e. a double bus fault), after which it asserts /HALT and, because of the above connection, inadvertently resets itself and restarts the process. Even if this were not the case, at least two bus cycles would be performed after reset to produce the double bus fault.
The right way is to simply tie BRL to ground - which is what GC/SGC do, and indeed what bus-side CPU cards should do if the 68008 on the motherboard is not physically removed. Under these conditions the 68008 arbitrates itself off the bus during its reset sequence, before fetching the initial PC and SSP - in other words it does not perform a single bus cycle; the bus becomes tri-state as soon as the power supply is sufficient for normal operation. Consequently, neither GC nor SGC monitors the BGL line: it goes low very soon after reset, but the bus is already tri-stated from the start and is safe to use without paying attention to BGL.
(All I say about incompatibilities is not meant as really important. A new QL mainboard replacement in series production is not the most likely thing to happen, so it should have all possible freedoms to happen at all.)
Agreed, and all the more reason to do the design right :P
What currently interests me more, is the existing QL mainboard replacement (Aurora) :mrgreen: I'm curious wether it cures the SGC related instabilities with QL-SD. ;) Nasta, do you still use an Aurora system with SGC? Or anybody else around here?
Although I have both, they are in storage; I haven't used them for a long while.
BTW the problem with the ground plane cut-out due to the J1 pinout first became apparent with Aurora while it was still in the prototype stages, specifically when two such cases are concatenated, as with an SGC plugged into a Qubide, which in turn plugs into Aurora. The problem is actually exacerbated by the SGC, which uses HC chips to buffer the bus with no termination, and the clincher is the Qubide, again with no termination. Somewhere along the line, ground bounce and ground loops become a problem; the best point to connect a power supply in this case is on the Qubide, especially if the same one is powering the hard drives. In my case everything was stable, but I didn't use anything on the ROM slot, so YMMV... Perhaps ask someone who uses a RomDisQ on the Aurora?



Re: Extended expansion connector...

Post by Peter »

Nasta wrote:With ISA at least the buffering was relatively well defined, while on the QL it's either non-existent or a rather haphazard combination of all sorts of things. I'll get back to this a bit later on regarding SGC/Aurora
Absolutely. Whenever I talked about the extended connector pinout, it was under the (achievable) condition of appropriate designs on both sides of the connector. SGC/QL/... is a different story. Even a directly connected SGC without further expansions generates illegal logic levels on a QL mainboard.
Nasta wrote:Well, either that or a whole lot of RC termination - or bus clamps (like PCI).
Actually I already use PCI clamps plus series resistors with the same FPGA, for the dual purpose of 5V tolerance and line termination. (The chip offers internal clamp diodes on some of its pins, so only a resistor is needed.) But if there are many lines, buffer ICs are easier to solder than resistors.
Nasta wrote:In any case, today's components easily produce edge rates that could sustain 100MHz clocks. This limits the maximum length of lines to (theoretically) ~30cm, but under ideal conditions. In real life 10cm is more like it, and that's with a good unbroken ground plane
To add a practical example, I kept the lines for the 133 MHz bus on the Q68 under 6cm, because not only signal quality was an issue, but timings were also extremely tight, and I'm using a two layer board. (Distance from line to plane larger than on a multilayer). Perfect GND plane though, and the FPGA allowed to place the pins exactly where they were ideal for routing.
Nasta wrote:Passing 68020 signals at 25-33MHz onto the bus may still require serious consideration of connector pinout.
From practical experience, I'd say 25 MHz will be safe without pinout considerations, just good layout on both sides. 33 MHz gets into a region where I'd start to think twice and calculate a few things.
Nasta wrote:Actually no because /HALT and /RESET are tied together on the motherboard.
Hehe, very good. :D Even if that was a Q60 detail, I would not remember anymore, let alone the QL.
Nasta wrote:BTW the problem with ground plane cut-out because of J1 pinout first became apparent with Aurora while it was still in prototype stages, specifically when two such cases are concatenated as in the case of a SGC plugged into Qubide, which in turn plugs into Aurora. [...]
Far worse conditions in many regards. You really went through the fire.
Nasta wrote:Perhaps ask someone who uses RomDisQ on the Aurora?
Would be a first indicator, but the RomDisQ is less critical than QL-SD. It uses an old PLD with 5V levels and low slewrate.



Re: Extended expansion connector...

Post by Nasta »

Peter wrote:
Nasta wrote: Well, either that or a whole lot of RC termination - or bus clamps (like PCI).
Actually I already use PCI clamps plus series resistors with the same FPGA, for the dual purpose of 5V tolerance and line termination. (The chip offers internal clamp diodes on some of its pins, so only a resistor is needed.) But if there are many lines, buffer ICs are easier to solder than resistors.
Yes, especially if you can choose relatively low-slew-rate ones. BTW, I could never figure out why those SO resistor arrays are so expensive, often more than a buffer. Fortunately the 1206 ones are cheap, but they can be a bit stubborn when hand-soldering :P
One experience I will share that may be of use: sometimes it's not a good idea to use internal clamps unless the input series resistors are of rather high value. The same goes for logic, CPUs and memory - the internal core on these is fairly low-current compared to what the IOs can do, and clamping a strong external signal (or the clamping produced by ringing of an output) can sometimes severely upset power and ground inside the chip. Not to go too much into detail, but I actually had to use resistive dividers on input pins to interface some 5V logic to a 3V chip because of this, and even add input capacitors to ground.
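A sketch of the resistive-divider interface described above. The component values are mine and purely illustrative; the point is the resulting source impedance that any added input capacitor works against:

```python
# 5 V logic output -> ~3 V input via a resistive divider.
# Values below are illustrative assumptions, not from any actual board.

def divider(vin, r_top, r_bottom):
    """Return (vout, thevenin_ohms) for an unloaded resistive divider."""
    vout = vin * r_bottom / (r_top + r_bottom)
    z_th = r_top * r_bottom / (r_top + r_bottom)   # seen by the input pin
    return vout, z_th

vout, z = divider(5.0, 1000, 1500)     # 1k over 1.5k
print(f"{vout:.1f} V, {z:.0f} ohm")    # 3.0 V, 600 ohm
# An added 10 pF input cap to ground gives an RC of ~6 ns against this
# 600 ohm source - it damps ringing but also slows the edge.
```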
Peter wrote: To add a practical example, I kept the lines for the 133 MHz bus on the Q68 under 6cm, because not only signal quality was an issue, but timings were also extremely tight, and I'm using a two layer board. (Distance from line to plane larger than on a multilayer). Perfect GND plane though, and the FPGA allowed to place the pins exactly where they were ideal for routing.
Similar to my experience with 100-166MHz on a two layer board, and I would say a very good example to show that in theory there is no difference between theory and practice but in practice there is :P
Peter wrote: From practical experience, I'd say 25 MHz will be safe without pinout considerations, just good layout on both sides. 33 MHz gets into a region where I'd start to think twice and calculate a few things.
Exactly. But I am anticipating some bad layout when some old boards are connected, which may well be necessary to get a working system to do development on. Actually, having combed through the QL J1 layout and compared it to what I know about what is used and what is not, it may well be possible to get sufficient integrity based on the existing pinout. Some pins along J1 can actually be connected to ground, because peripherals should not use them anyway, and that can serve future stuff that needs to run faster - much faster than a regular QL, though consider I am talking in relative terms here, since the regular QL is really not that fast. To do something like fast 32-bit (even multiplexed, so fewer pins), one needs to go a step further: a lot of signals toggling at the same time, potentially quite fast, generated by fast logic. However, that part of the spec would be optional.
Nasta wrote:Actually no because /HALT and /RESET are tied together on the motherboard.
Hehe, very good. :D Even if that was a Q60 detail, I would not remember anymore, let alone the QL.
Well, don't give me too much credit. Some time ago, when Dave first proposed something with a 68020, I looked at an implementation of a simple DRAM controller that would use bus cycle retry (BERR + HALT) when a refresh is required, and then asked myself why something like that could not be done on the regular QL - and found out :P - there is no HALTL on J1 because it's internally joined with RESETL.
Even so, this gets complicated on a regular 68k because HALT needs to linger one cycle after BERR, which means lots of special-case logic, making the whole design a bit too complex. Hence, this was relatively fresh in my mind :)


User avatar
Dave
SandySuperQDave
Posts: 2778
Joined: Sat Jan 22, 2011 6:52 am
Location: Austin, TX
Contact:

Re: Extended expansion connector...

Post by Dave »

Great discussion. :)

I will go through the details of what was said, and also use information sent directly by Nasta, to create a new infographic of the updated proposal today.



Re: Extended expansion connector...

Post by Dave »

Sorry, I haven't got the new info ready to post yet. I have been working on components and fighting fires with Gold Card battery replacements. Some rejigging to do, but everything's moving positively.

Now, I just need to find an economical PCB house, because the one I used to use has been taken over and has completely overhauled its pricing structure, doubling prices.

Any recommendations out there? I'm fine with a 2-week+ turnaround on 4-layer boards, lead-free/RoHS, no testing - just the facts, ma'am!


User avatar
1024MAK
Super Gold Card
Posts: 592
Joined: Sun Dec 11, 2011 1:16 am
Location: Looking forward to summer in Somerset, UK...

Re: Extended expansion connector...

Post by 1024MAK »

Regarding power pins...

The existing expansion connector has only VIN (+9V nominal, on three pins), VP12 (+12V) and VM12 (-12V).

Switching regulators are much better than the common 7805. They normally would not need heat-sinks even if the VIN was a regulated +12V.

However, if users are using a typical SMPSU, these have the +5V (and +3.3V) lines as the high current outputs, with the +12V being a lower power line (and less well regulated).

Also keep in mind that pushing a high current through only one connector pin is not a good idea, mainly due to voltage drop.

Existing cards mostly assume that VIN is a nominal +9V. The UK Sinclair QL PSU is a voltage-limited / current-boost design, using a thyristor (SCR) to switch in extra windings when the +9V output drops. This limits the voltage in normal use (compared to a simple smoothed, unregulated supply).

If VIN is made to be a +12V line, some existing cards would run far hotter than some people would be comfortable with.

Now if a typical SMPSU is used, it is possible to use a switching regulator to generate a +9V supply (without needing a large heat-sink) from either the +5V line (boost/step-up) or from the +12V line (step-down).
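The heat argument is easy to quantify. Here is a sketch comparing a linear 7805 against a switcher; the 1 A load and the 90% efficiency are my assumptions, not figures from the thread:

```python
# Regulator loss at a given load: linear vs switching.

def linear_loss_w(v_in, v_out, i_load):
    """A linear regulator dissipates the full input-output drop as heat."""
    return (v_in - v_out) * i_load

def switcher_loss_w(v_out, i_load, efficiency):
    """A switcher's loss follows from its conversion efficiency."""
    p_out = v_out * i_load
    return p_out / efficiency - p_out

print(f"7805 from 9 V:  {linear_loss_w(9, 5, 1.0):.1f} W")     # 4.0 W - heat-sink territory
print(f"7805 from 12 V: {linear_loss_w(12, 5, 1.0):.1f} W")    # 7.0 W - 'far hotter'
print(f"switcher @ 90%: {switcher_loss_w(5, 1.0, 0.9):.2f} W") # 0.56 W - no heat-sink
```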

So do we go with Dave's idea (only one extra power pin, a single +5V supply), which supplies only low-power cards, with power-hungry cards getting their power from the VIN pins (+12V)?

Or do we have a completely separate power connector, which can supply a high-current +5V supply plus a high-current (low-impedance) 0V / ground connection?

Personally, I would rather see no extra power pins on any extended expansion connector; instead, have as many 0V / ground pins as practical. Low / medium power requirements can be met by on-board switching regulator(s) supplied from the VIN pins (+9V or +12V, with +9V being preferred if older cards are in use). High-power cards should have their own separate power connector, supplied directly from a suitable power supply (+3.3V and/or +5V).

Mark


:!: Standby alert :!:
“There are four lights!”
Step up to red alert. Sir, are you absolutely sure? It does mean changing the bulb :!:
Looking forward to summer in Somerset later in the year :)

QL, Falcon, Atari 520STFM, Atari 1040STE, more PC's than I care to count and an assortment of 8 bit micros (Sinclair and Acorn)(nearly forgot the Psion's)

Re: Extended expansion connector...

Post by Dave »

With it seeming more and more likely that no existing QL card would be plugged into QL2, I don't think this is too much of a problem.

Each pin in a DIN 41612 connector is rated for 2A. I was taking the view that it is trivial to replace the 7805 on old cards with a pin-compatible 5V SMPS module for a few pounds - there are drop-in higher-efficiency equivalents these days, though not 95%+ efficient like my design.

I do think it needs more thought.

I am currently favoring the idea of smaller, pre-decoded expansion slots with a 16- or 32K addressable area, plus an SPI interface. If the slots are empty, then RAM can be enabled in the empty space.

