RFC: QL interim project - Peach Keen Dimm 2-A

Dave
SandySuperQDave
Posts: 2765
Joined: Sat Jan 22, 2011 6:52 am
Location: Austin, TX

Re: RFC: QL interim project - Peach Keen Dimm 2-A

Post by Dave »

I think this sounds like a well thought out project.

The main advantage I see is replacing the pre-existing custom logic with something documented. I would surely buy or build one of these!

All I ask is that it has some form of standard expansion.


Dave
SandySuperQDave
Posts: 2765
Joined: Sat Jan 22, 2011 6:52 am
Location: Austin, TX

Re: RFC: QL interim project - Peach Keen Dimm 2-A

Post by Dave »

I would much rather help you do a more advanced project, even if it's just cheering from the sidelines due to my limited knowledge.

The important thing is that it gets finished and built, whatever it is :)

"A bird in the hand is worth two in the bush."


Dave
SandySuperQDave
Posts: 2765
Joined: Sat Jan 22, 2011 6:52 am
Location: Austin, TX

Re: RFC: QL interim project - Peach Keen Dimm 2-A

Post by Dave »

Here's the intended spec I was working with:

CORE:
680X0 running at an equivalent of 8x QL speed or better in the first iteration, 100x or better in a later iteration (which would require a 68k in an FPGA)
4MB+ of SRAM
256KB of FLASH, which would include tools to select ROM versions, toolkits, SMSQ, etc., and allow switching/flashing of new ROM versions as needed/desired.

STORAGE:
WD1773+ compatible floppy
ATA IDE
Compact flash (offshoot of IDE)

INTERFACES:
10/100 Ethernet
2x Serial and 1x parallel
USB if possible, for keyboard

VIDEO:
Non-interrupting (video fetches don't steal CPU cycles): using dual-port SRAM and a custom VIDC-style chip (FPGA again) to recreate the standard and some extended modes, with a more modern output format

EXPANSION:
An extended expansion port

This is the "ideal" spec I came up with.

As you can see, it requires a good chunk of FPGA work, which is why I have signed up for a couple of courses at the local community college ;)

These are the basics most people have asked for. I have divided them into primary goals and secondary desires, with all the heritage interfaces (SER, PAR) being secondary and updated video being primary.

Knowing what I was trying to implement was very important in working out how I wanted to proceed, so a discussion of that might be a good place to start. :)

What do you think is a must-have, and what is merely a "nice feature"?


Nasta
Gold Card
Posts: 443
Joined: Sun Feb 12, 2012 2:02 am
Location: Zapresic, Croatia

Re: RFC: QL interim project - Peach Keen Dimm 2-A

Post by Nasta »

Brane2 wrote:@Dave:
WRT CPU speed:
A non-waiting 68000 should be on average 60% faster than a 68008. But with the QL's heavily braked 68008 this is more likely 100% or even more.
So a 68000 at 32MHz might well feel 10 times faster, if not more. And with the SEC version, there is a good chance that it might work even significantly faster, even at 3V...
WRT the rest of the system
(...)
This is why it makes no sense to use it for anything besides a compatibility portal. I would go with this as far as support for a typical cheap 17" LCD at 1280x1024 with 4bpp (and perhaps a byte per pixel), with the option of a native 512/256x256 with a border.
68020 is better still, not dramatically so but sufficiently.
It has 3 advantages over a standard 68000:
1) A 2x wider bus operating on a 3-cycle-per-transfer basis, which can be dynamically sized down to 16 or 8 bits very simply.
2) A write-back buffer and a prefetch buffer that let the CPU core move on while data is written to the bus, or execute short loops directly from the prefetch buffer without actually fetching them via the external bus, which significantly increases data transfer if you write your routines in assembly. This is with the cache disabled.
3) A small but efficient instruction cache, which is the simplest to support without problems with most existing programs (self-modifying code being the obvious exception (*)). Even though the cache size seems pathetic, it caters well for QL-style programming, most of the important stuff being assembler.
It also has a disadvantage:
1) Not completely compatible, due to a separate interrupt stack and slight differences in the formats of some stack frames. However, it's the simplest of the 68000+ family on which to get bog-standard 68k compatibility to the largest degree, the SGC being the obvious proof of this fact.

(*) which is a no-no as far as the QL is concerned, in general.
QL program code is expected to be position independent and fully re-entrant, which completely rules out self-modifying code (and, incidentally, makes it possible to run everything from ROM).
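To make that concrete, here is a minimal C sketch (all names invented, not any actual QDOS API) of the coding style this implies - per-call state must live on the stack or in caller-supplied context, never in the code or in static storage:
Code:
/* NOT re-entrant: two jobs sharing this code image would trample
   the shared 'count'. */
static int count;
int next_bad(void) { return ++count; }

/* Re-entrant: all state is supplied by the caller, so any number of
   jobs can execute the same code image concurrently, even from ROM. */
int next_good(int *ctx) { return ++(*ctx); }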
[USB, keyboard, mouse]
Polling is not a big disadvantage as far as the OS is concerned, if the USB hardware support makes it fairly simple and not prone to deadlock (as in: someone removed the USB device, let's wait an unreasonable amount of time in a loop with all interrupts disabled in the hope it will respond to the poll).
Almost everything on the QL is actually polled, either at scheduler loop speed (quick if the machine is just waiting for the user) or at least at polling interrupt (20ms) speed (unless something is going on that requires all interrupts to be disabled).
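A rough sketch of a "safe" bounded poll in C - the status register and its ready bit are assumptions for illustration; the point is simply that the loop must give up rather than spin forever with interrupts off:
Code:
#include <stdbool.h>

#define POLL_LIMIT 1000   /* iterations; would be tuned to the hardware */

/* Returns true if the (hypothetical) device signalled ready in time. */
bool usb_poll_ready(volatile unsigned char *status_reg)
{
    for (int i = 0; i < POLL_LIMIT; i++) {
        if (*status_reg & 0x01)      /* assumed 'data ready' bit */
            return true;
    }
    return false;    /* device unplugged or stalled: report an error */
}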

One advantage of a 68k+ CPU here is support for the full 7 interrupt levels. Not because we'd want to use vectored interrupts or some such complicated stuff, but because certain levels of interrupt can be assigned to specific things like block data transfer, and by this I mean as an OS resource. Here things are pretty much free to be defined, since no such thing has existed so far. Adding extra trap calls is not a problem, nor is implementing interrupt service lists (linking and unlinking in the usual manner).
An example would be a system resource to transfer X amount of data from address A to B, with routines that cater for incrementing A and/or B, and linked code to test for a condition, for instance FIFO full in some piece of hardware. The actual transfer routine is implemented by the OS; the "user" has to link the peripheral onto the interrupt line, link in the appropriate interrupt condition test code, and specify X, A, B and options. The OS then provides a low-level hardware service routine that pumps data in or out of memory, maintained by higher-level driver code.
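Purely illustrative (none of these names exist in QDOS), a C sketch of the shape such a descriptor and OS-side pump routine might take:
Code:
#include <stddef.h>
#include <stdbool.h>

typedef struct {
    const unsigned char *src;   /* address A */
    unsigned char *dst;         /* address B */
    size_t count;               /* X bytes left to move */
    bool inc_src, inc_dst;      /* increment A and/or B per byte */
    bool (*ready)(void);        /* linked-in condition test, e.g. FIFO not full */
} xfer_req;

/* Low-level service routine the OS would run from the interrupt:
   pump bytes for as long as the linked-in condition allows. */
void xfer_service(xfer_req *r)
{
    while (r->count && r->ready()) {
        *r->dst = *r->src;
        if (r->inc_src) r->src++;
        if (r->inc_dst) r->dst++;
        r->count--;
    }
}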
Another similar use would be a fast periodic interrupt used for system timing, with a well-known period of smaller granularity than the usual polling interrupt. The OS could then provide timing services via this interrupt. The point being, the interrupt structure / priority is defined by the OS, NOT by the user (to keep things in the RTOS domain).
So far I had in mind for a "mule":
- 68SEC000FU20 working on 3.3V
- cheap 3V FLASH 70ns. Something like 1 or 2 MB
- maybe 4MB of video RAM.
WRT memory organization and (mentioned later) DDR2 RAM - better stick with SDR. It is still available and will remain available, because it's simpler to implement in small systems due to power and impedance control issues. Especially for this "mule" there would only be disadvantages in using DDR, because it's simply too fast and manufacturers will NOT guarantee operation at slow speeds. This is one odd fact I found during the development of the ill-fated GoldFire. Just recently I was involved in another project where the same scenario happened with a DDR2 part. Unlike SDR, modern DDR2 has internal PLL-like structures to compensate for internal delays and optimize setup and hold times. It took nearly a month of back-and-forth to get the manufacturer to check whether this could work on a slow bus, and no - it does not.
DDR2, and even DDR, has no real advantage here even if the RAM were also used as a frame buffer, and it's not easy to get in "small enough" (!) sizes. Remember, the 68EC020 has only 16MB of address space.
It is worth mentioning here that all synchronous RAM (static or dynamic) is actually best suited to burst accesses; the reason it was created in the first place is the emergence of CPUs that use burst accesses (usually 4 transfers per burst). These appeared in the 68k family from the 68030 onward, so with a 68020 we are "stuck" emulating normal random access, i.e. a 1-transfer "burst". Even at the fastest official 68020 clock rate, an SDR DRAM can do this in 3 cycles, but in reality the 68020 would need at least 4 because of the actual times at which the address is supplied and data is expected. But the slowest SDR RAM you can find today operates at 133MHz, so in effect every 68020 memory cycle can last up to 12 cycles of the RAM clock, and within that one could fit tons of interesting things, including, say, a 4-cycle burst of reads to fetch data for a screen, and still have enough time to cater for any 68020 bus access at full speed.
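The rough arithmetic behind that claim (clock figures assumed from the published 68020 and SDR specs, not measured on real hardware):
Code:
#include <stdio.h>

int main(void)
{
    double cpu_mhz = 33.0;      /* fastest official 68020 clock */
    double ram_mhz = 133.0;     /* slowest SDR SDRAM still made */
    double bus_cycle_ns = 3.0 * 1000.0 / cpu_mhz;  /* 3-clock bus cycle */
    double ram_clock_ns = 1000.0 / ram_mhz;

    printf("68020 bus cycle: %.1f ns\n", bus_cycle_ns);   /* ~90.9 ns */
    printf("RAM clocks per CPU cycle: %.1f\n",
           bus_cycle_ns / ram_clock_ns);                  /* ~12.1 */
    return 0;
}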
If there is a FPGA in the system, designing such a controller should not be a great problem.
A more battery friendly but also a lot more costly version would be the use of SRAM.
Regarding Flash - this should really be thought of primarily as bulk storage. Executing code directly from it would be a LOT slower than from RAM, although it does offer write protection by default.
Given the relatively small address map, I would go for something like a paged FLASH system, where one small part of the FLASH ROM is permanently present somewhere in the memory map and is basically only used to start up the system (and quite possibly this also implies shadowing that memory area with RAM for faster access). The rest is treated as storage. That being said, given the ability of the 68020 to dynamically size its bus, one could use a simple 8-bit FLASH at the maximum affordable capacity one could find. Since it's going to be copied into RAM, the interface width is not really a problem but an advantage - fewer lines to route and fewer chips to power. In fact, one could even use the unused FC line codes to access it in the same address map without interfering with anything else. Alternatives would be serial Flash, or perhaps even an SD card in SPI mode.
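A minimal sketch of that boot-time shadow copy in C - the addresses and size are invented placeholders, and the decode switch at the end is hand-waved:
Code:
#include <stddef.h>

#define BOOT_FLASH ((volatile const unsigned char *)0x00F00000) /* assumed */
#define SHADOW_RAM ((unsigned char *)0x00000000)                /* assumed */
#define BOOT_SIZE  (64 * 1024)

void shadow_boot_rom(void)
{
    /* Byte-wide copy: with the 68020's dynamic bus sizing, a cheap
       8-bit flash is fine here, since it is only read once at boot. */
    for (size_t i = 0; i < BOOT_SIZE; i++)
        SHADOW_RAM[i] = BOOT_FLASH[i];
    /* ...then flip the decode logic so RAM answers in this range. */
}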

To sum it up:
Implement the whole memory map as RAM, then "punch holes" in it where you want your boot FLASH and IO to be (a toy decode sketch follows below). Things are generally much more flexible that way. Of course, a "DSMCL"-style control line may be provided to do clever things with peripherals and RAM, such as shadowing.
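The "punch holes" idea as a toy C decode, with the window addresses invented for illustration:
Code:
typedef enum { SEL_RAM, SEL_BOOT_FLASH, SEL_IO } chip_sel;

chip_sel decode(unsigned long addr)
{
    if (addr >= 0x00F00000UL && addr < 0x00F10000UL) return SEL_BOOT_FLASH;
    if (addr >= 0x00F10000UL && addr < 0x01000000UL) return SEL_IO;
    return SEL_RAM;    /* everything else answers as RAM */
}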
- cheap Spartan6 in TQFP package, or the smallest of the slightly bigger BGAs (256 balls). Used at the start just for simple logic and upgraded gradually.
BGA means a multilayer board, and a good pick-and-place + oven facility. Although I know some people who have done it in the oven at home, if you want reliability, it needs to be done professionally. Also, if it doesn't solder well, you can pretty much kiss the PCB goodbye, unless again you have access to professional equipment (or a good oven? :) ). TQFP can be soldered (and unexpectedly easily) at home with a soldering iron with a tip barely smaller than a shovel :)
- Microchip's MIPS with USB and Ethernet interfaces (and serial ports and parallel port, SPI, I2C, PWM for native "sound", etc. etc.)
- materializing the logic for the floppy interface in the FPGA
- basic IDE, no DMA etc. - at least at the start.
- USB/PS2 for mouse and keyboard
- fast SPI for the QL's native keyboard with beeper and microdrives
- interface for SD and CF for simple data moving between mule and PC
- microdrive and native net interface, if feasible.
- battery support. Not just for the RTC, but the whole machine.
OK, let's just forget microdrives, period. They are quite frankly a shameless and rather unique way of exploiting Murphy's law; they work only because working is the worst fate that could happen to them.
Floppy interfaces are a problem initially since most of the software is on floppies, but today the capacity is trivial and one could well solder a CF card or something onto the board FOREVER and pretty much not run out of program file storage.
Supporting a 'native' QL keyboard is actually an advantage. This sort of system, as it has been said, should not be viewed as a QL replacement, but as a 'useful box that can do many things and is simple to program'. This does on occasion reduce the thing to a literal 'box' with a simple keyboard of a few keys and, say, a small LCD. Making this simple to achieve is a bonus.
Regarding the native network: it's actually nothing more than a simple serial port at a constant bit rate; the rest (protocol and related timing) is software. It is based on block transfer, so in theory the same protocol could work over all sorts of different media (CAN comes to mind, as does infra-red). There was once something called FastNet, based on an old UART chip with a hardware FIFO added; it used a modified "net" driver. Same author as Qubide :)
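Purely to illustrate the block-transfer point, a guess in C at the shape of such a frame - this is NOT the actual QLAN format, just the kind of structure that makes the protocol media-independent:
Code:
#include <stdint.h>

typedef struct {
    uint8_t  dest;        /* station number of the receiver */
    uint8_t  src;         /* station number of the sender   */
    uint16_t block_no;    /* sequence number of this block  */
    uint8_t  len;         /* payload bytes used             */
    uint8_t  data[255];   /* payload                        */
    uint8_t  checksum;    /* simple additive check          */
} net_block;

/* Because everything is framed in blocks, the same structure could
   ride over a UART, CAN, infra-red - any byte-oriented medium. */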
Regarding support of old-style IO resources, the only one that might be of use is the RTC counter. This could well be implemented in software on a second, smaller microcontroller (PIC?) that offers external CPU access via some sort of shared memory or a similar mechanism.
In fact, a long time ago I thought up a small board with its own 68008, SRAM, a bit of hardware dual-port RAM (2k if I recall correctly) and, initially, a fast dual UART. Later on the idea was expanded by replacing the dual UART with a PC-style multi-IO chip, primarily to implement serial ports, keyboard, mouse and parallel port. The board also had an EPROM for the QL to recognize, which would hold code to start up the board. The extra 68008 side actually had no ROM at all; the idea was to fill up the dual-port RAM with start-up code from the QL side, then start the other 68008 and load the rest of the code from the EPROM on the QL side, or from a file through the dual-port RAM. Once the code was loaded, the DPRAM would be used to communicate with the other 68008 as a sort of mailbox/FIFO, while it would be doing its thing sorting out the peripherals and reducing their flow of data to simple streams the QL could use. This sort of thing should be MUCH easier to do today, with microcontrollers essentially having all that hardware and more on a single chip.
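The mailbox/FIFO part is simple enough to sketch: a single-producer, single-consumer ring in shared dual-port RAM needs no locks, because each index has exactly one writer (layout invented for illustration):
Code:
#include <stdint.h>

typedef struct {
    volatile uint8_t head;      /* written by the producer side only */
    volatile uint8_t tail;      /* written by the consumer side only */
    volatile uint8_t buf[254];
} dpram_fifo;

int fifo_put(dpram_fifo *f, uint8_t b)    /* producer CPU */
{
    uint8_t next = (uint8_t)((f->head + 1) % sizeof f->buf);
    if (next == f->tail) return -1;       /* full */
    f->buf[f->head] = b;
    f->head = next;                       /* publish after the data */
    return 0;
}

int fifo_get(dpram_fifo *f)               /* consumer CPU */
{
    if (f->tail == f->head) return -1;    /* empty */
    uint8_t b = f->buf[f->tail];
    f->tail = (uint8_t)((f->tail + 1) % sizeof f->buf);
    return b;
}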
- instead of discrete parallel FLASH chips, use serial FLASH for the FPGA configuration and have the FPGA copy the user data into RAM automatically at boot.
See the note about FLASH above; having an FPGA on board opens up many new ways of booting the machine.


Nasta
Gold Card
Posts: 443
Joined: Sun Feb 12, 2012 2:02 am
Location: Zapresic, Croatia

Re: RFC: QL interim project - Peach Keen Dimm 2-A

Post by Nasta »

Brane2 wrote:@Nasta:
Very nice ideas, but I don't think you have noticed the stated mission of the machine. I thought that naming it after the Slavic god of simplicity would explain everything... :mrgreen:
Actually I missed the 3.3V compatibility issue and that is, indeed, an issue, especially interfacing to modern chips.
The purpose of this machine is to be optimised for the following roles:

1. Compatibility platform.
Which means that it should be able to run as many original programs as possible, correctly. It should be able to do data recovery from old media as far as possible. For microdrives this means recreating the microdrive boards in a modern version. Floppies are less of an issue.
Under this point, speed is an issue only if the original program is unbearably slow. IOW, it is just a cushion to make original programs more responsive and likeable.

2. Migration platform:
All that access to old data means nothing if it can't be transformed into a more usable, modern form, or at least packaged in modern containers.
So, besides old interfaces, this mule has to have at least some modern interfaces, to be able to migrate data if for nothing else.
Just last night I made a trip to the cellar to find out if I have one of those colour Triglav monitors ("that we took in Iskra in the good old days"). I found a couple, but boy do they look crappy today.
Even the cheapest LCD totally owns them. The new machine has to have efficient (at least) DVI connectivity; USB and PS/2 are a must, etc. etc.

3. Testing platform (toward developing PKD-3)
The mule has to be usable for testing and hacking purposes. It has to be able to show possible problems with old SW on new HW, and possibly vice versa.
Also, the QL is and always was an ideal machine for testing concepts, the behaviour of some equipment, etc. It was always very doable to solder on a couple of registers and latches and make yourself a chip programmer, motor controller, etc.
CPU speed here could be put to good purpose, but it has limited potential. The mule's CPU will never reach the speed of the final solution, so anything that really requires it will have to wait. OTOH, for simple bit-flipping control, even a 68000 should suffice.

In addition to these three roles, the machine should be relatively simple, small and cheap. And open, in the sense of a perpetually (or at least in the long run) developing project. I don't think that grafting TheNextGeneration (TNG) machine concepts onto the working mule would give good results.

...

WRT the 68EC020 - nice idea. But everything at least Farnell carries under that name works on 5V...
I can't find 3V models even at Freescale. Which means headaches.
...
Also, as part of the testing ground, this working mule might contain a somewhat beefier FPGA, so one could start with a cheap 68000 CPU and then at a later time implement the CPU inside the FPGA and deactivate the external one. And even that would not be meant as a critical milestone on the road of transformation from donkey to horse, but as a test to be applied to the real machines that are to follow.
Yes, in this case speed is not as important as ease of use with components that would be carried over to the TNG QL (or whatever you want to call it). Especially if it relies on some extra features of the FPGA, and the aim is to (gradually?) replace the CPU with an FPGA implementation, then it follows that the actual "mule" CPU would be connected to everything else through the FPGA, in which case 3.3V compatibility is a BIG advantage. This sort of design also has fringe benefits: in all probability it would let the CPU run at a much higher speed, since its IO is almost unloaded and the FPGA is used to sync all signals, so the logic within can present near-ideal signal conditions to the CPU.
WRT USB polling, I see it more as a problem when fast reaction to a device event is needed but the time of the event is unknown in advance.
Then the host has to do high-speed polling, which can be awkward. There could be applications that would value microsecond response times, which is practically impossible with USB.
Hehe, no microsecond responses with a 68k at 20MHz :) but it's really not needed, even if it were possible with USB. For QDOSMSQ implementations it all comes down to how the driver is written. The OS itself has a number of RTOS features, so it can deal with responses not coming within an allotted time interval. Be that as it may, this is step >$10h in the complete story, and perhaps crossing that bridge should be planned when it's actually reached.
WRT DDR2/3 problems, don't modern FPGAs come with support for them?
I seem to remember Xilinx claiming 800MHz DDR2 for Spartan6. This shouldn't be so very slow. I remember running RAM sticks even slower for some time in my PC.
If the FPGA in question supports a direct interface (has an embedded controller), it would be stupid not to use it. My remarks were geared towards running the RAM in sync with the CPU. If there is a buffered controller in between, it's not an issue. Of course, if a simpler FPGA is used, building a controller within it requires resources.
Regarding DDR2 specs, there can be surprises. If they are run within the parameters used in most applications (read: PC motherboards), then it should all work. Relying on some of the more obscure parameters can lead to problems, and (oh, surprise) you might be surprised how many manufacturers didn't even test for these, even though they're right there in their own datasheet. Basically, copy-paste documentation; and since only 1% of users complain (easy if 99.5% are at the level of "stick it into the PC and go"), why bother correcting or even testing for it.
Side story: if you have an SDR DRAM capable of running at latency 2 @ 133MHz, you should be able to run it at latency 1 @ 66MHz. There is even a respective combination in the set-up register. "Surprisingly", most manufacturers don't know what will happen if you use that setting. One even offered to go and look into the VHDL code for the internal logic :) and found that there is no defined logic for that case; someone else wrote the code "a long time ago", so they have no idea, and did not test for that condition. In the real world, you might get latency 1, latency 4, or a non-responsive chip.
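For reference, the JEDEC SDR mode register does have an encoding for CAS latency 1 (bits A6..A4 = 001) - whether the silicon honours it is exactly the open question above. A hedged sketch of building the mode word:
Code:
#include <stdint.h>

/* A2..A0 = burst length (log2), A3 = burst type, A6..A4 = CAS latency */
static uint16_t sdr_mode_word(unsigned cas, unsigned burst_len_log2)
{
    return (uint16_t)((burst_len_log2 & 7u) | ((cas & 7u) << 4));
}
/* sdr_mode_word(1, 0) = CL1, burst of 1: legal in the register map,
   but per the story above, often untested by the vendor. */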

One more thing:
Caching on a 68k is "interesting" with a CPU, or generally an OS, that is not aware of it. Generally the safest approach is a simple unified memory cache that just mirrors the contents of the memory, not asking too much about what they actually are :)
That being said, the 68k CPU in question is orders of magnitude slower than the DDR2 RAM, so it would probably be better to use FPGA embedded RAM as a buffer for video data read in (longer) bursts from the DDR2.
WRT Flash: I was thinking about using the serial flash for FPGA configuration also as program ROM once the FPGA initializes, simply because it seems to be the cheapest route. You need FPGA configuration FLASH anyway, and it is of a size on the order of megabits. So you could use one 2x bigger and keep QDOS etc. in the upper half. One component with a dual purpose. And these serial FLASH chips tend to be cheap and small. And not that slow: I remember seeing one that had a 4-bit interface, a clock up to 133MHz, and DDR operation.
Agreed. I would normally argue that parallel FLASH is still cheaper (I just went through a design where we decided to include fairly large amounts just for that reason), but since you need a serial configuration FLASH anyway, and those are readily available up to 128Mbit, with half- and full-gigabit units just about to come out, in small 4- or 16-pin cases, it's simple, relatively cheap and very easily expandable - serial FLASH has no address lines, so basically you could put any capacity into the smallest case, as long as the piece of silicon itself fits. Even at single-bit width they routinely run at 33MHz and often 2-4x that clock rate. Regarding the FPGA, it's only booted once per power-up, and loading the (comparatively tiny) QL code even through a 1-bit 33MHz interface makes it 2x faster than anything available on the actual QL, and that includes all possible expansions.
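A quick sanity check of that loading-speed claim (image size assumed; a plain 1-bit, 33MHz read with no command overheads):
Code:
#include <stdio.h>

int main(void)
{
    double mbit_s   = 33.0;                  /* 1-bit serial flash clock */
    double kbyte_s  = mbit_s / 8.0 * 1000.0; /* ~4125 KB/s raw           */
    double image_kb = 256.0;                 /* assumed ROM+toolkit size */
    printf("~%.0f ms to shadow a %.0f KB image\n",
           image_kb / kbyte_s * 1000.0, image_kb);   /* ~62 ms */
    return 0;
}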
WRT BGA work and rework: I have a small IR oven for lab work and practically free access to a big (hot-air) oven in production...
WRT TQFP vs BGA - yes, TQFP is nicer to solder and design for, but it is slower and doesn't have as many pins. It tops out at QFP160 or so, while BGA 3xx and even BGA 48x models are relatively accessible. I thought about using a BGA chip, connecting only the central pins to the CPU, placed under the FPGA on the solder side.
Again, recent experience:
TQFP can be found in up to 208 pins and in some rare cases 260 pins, but from the standpoint of a given PCB geometry it's NOT easier to design for, compared to BGA, as long as we are talking 1.27mm-pitch BGA or thereabouts. This may sound like a paradox, but it really isn't - the tight pitch of the TQFP pins around the edges of the case makes the vias used to connect the pins to the relevant layers a critical part of the PCB design, as they are normally wider than the pin pitch. As a result, they actually get in the way of the signal lines and each other. A typical design will then look like an extension of all the pins to all sides, making the internally longer wiring even longer on the outside, the extension needed only to spread out the pins sufficiently so that the required number of vias can be placed. It's not too bad even on 2-layer boards for up to TQFP100, but over that (unless it's a large-pitch chip) one needs multilayer. But even then, at TQFP160, 208, 260 (which are all the same size, just with tighter pin spacing), getting things onto 4 layers with reasonable-length wires becomes a challenge, which can be somewhat simplified by putting decoupling capacitors on the opposite side of the board under the chip. At this point we are long gone into territory where BGA has an advantage - as long as one keeps in mind that you can get to (on average) 2 "rings" of BGA pins per layer. Larger pitch and smaller geometry can shift that to 3 rings of pins per layer, but then you are back to the via problem when routing those lines where they need to go. BGA also has the shortest possible path to power and ground planes, which may also simplify heat transfer issues.
Soldering the CPU under the FPGA and connecting it directly to the balls of the FPGA's BGA package above it is a very neat idea, as long as you keep enough space for the decoupling caps, which normally go there :)

Yeah, but this working mule was meant as a complete replacement for the original QL. So, when you find a cartridge with a cool but extinct program at the local flea market, how would you read it on the PKD-2A?
True, though one could argue someone somewhere must have it on floppy :)
And yes, floppy is a joke, but you need a way to recover old data from such media. Same with the old "net". Suppose you find some machine with some obscure interface that you can't connect to the expansion port of the mule. Wouldn't it be nice to be able to transfer data through the "NET", no matter how crappy it is?
There was also SERNet, the same protocol via serial ports. This is certainly something to consider, since the serial port can also be used to transfer data (slowly...). In fact, having decent and well-supported serial ports on this machine may be a great advantage, since lots of equipment needs them, and on the PC they are becoming extinct, with the only support left through USB (which can be tricky sometimes).


Nasta
Gold Card
Posts: 443
Joined: Sun Feb 12, 2012 2:02 am
Location: Zapresic, Croatia

Re: RFC: QL interim project - Peach Keen Dimm 2-A

Post by Nasta »

A further thought based on recent experience...
Monitors are getting stupider by the day, and there's no two ways about it. This is a rather important problem for a vintage machine, or one intended to emulate it. Whatever video interface one wants to build for it must be able to drive a common LCD monitor in its native resolution, and today that is 1920x1080. In theory, VGA 640x480 should work, but as it turns out, in more and more cases this is really only theory.
Just recently I bought a new 'high end' LCD display with LED backlight and IPS matrix. Brand name, too (though I'm not going to tell which one). Here are a few impressions:
1) Forget anything but native resolution over HDMI or DVI. It gets rather interesting even with a PC, as no BIOS startup is displayed at all.
2) Not only does it not display common resolutions via digital input, but it also lies about the signal. If it does not like it, it does not say "out of range" or something like that, but NO SIGNAL, and then goes into power-saving mode, blanking the screen. There is NO way to get it to tell you whether there actually is a signal or not!!!
3) On one of 4 PCs it communicates scan parameters to the OS which it then fails to be able to use, even though they are well within its own published specs. It says "out of range" and shows horizontal and vertical frequencies clearly within range.
4) It will display any resolution (or at least attempt to) via the analog interface. However, in some cases it's impossible to get it to align the screen properly, so some parts of the picture are missing, or it's offset to one side to the point that there is a large black part visible. No amount of control twiddling will get it to work.
5) Results are better if one removes the I2C communication lines in the analog cable, so the PC thinks it's a default monitor. In other words, this is a monitor that is literally incompatible with itself.

All of this is the result of the complete micro$oftization of every computer product, as in: no guarantee offered, and who cares, since you can't get another one that works properly any more. Its internal programming is the worst sort of "cut and paste" from some development system, where someone could not be bothered to support even the most basic options the monitor chipset offers - because you can be sure it can actually scale any resolution to the native one of the LCD panel, but this has been blocked on purpose, in order to get you to buy a new PC, graphics card, etc.
I wish I could say this is some conspiracy theory, but unfortunately it's cold hard reality. In recent years I have been heavily involved with video on LCD monitors, and there is a consistent trend of stupidifying the hardware with crippled software. It used to be done to get you to buy a better monitor, but now even the "high end" ones come close to being non-functional. Shamefully, 5-year-old LCD monitors have absolutely no problem displaying pretty much anything.
BTW, there are various workarounds for some of these problems, but you can never be sure they will work on newer monitors. For instance, if you display 1024 vertical pixels of a screen in a 1080-pixel vertical area (in order to get the required native resolution), it will NOT work. The monitor will assume the resolution is something x 1024 unless you provide dots or lines in the corners of a something x 1080 pixel area, and "something" had better be 1920. Even multiples do not work - 960x1080 double-scanned will be shown, but will fail to stretch to the edges of the screen. Something like this is actually impossible to do unless it's done on purpose! Also, "highlighting" the native display area (say 1920x1080) slightly, i.e. adding a darker or lighter border around it and using only part of the area for actual pixels, will ALSO not work. It's frankly enough to make you want to bend the responsible programmer over the knee and flog him until he can't walk or sit for a month, and continue that treatment until the damned thing works properly.


Mr_Navigator
QL Fanatic
Posts: 782
Joined: Mon Dec 13, 2010 11:17 pm
Location: UK, Essex

Re: RFC: QL interim project - Peach Keen Dimm 2-A

Post by Mr_Navigator »

Nasta wrote: Monitors are getting stupider by the day, and there's no two ways about it. This is a rather important problem for a vintage machine, or one intended to emulate it. Whatever video interface one wants to build for it must be able to drive a common LCD monitor in its native resolution, and today that is 1920x1080. In theory, VGA 640x480 should work, but as it turns out, in more and more cases this is really only theory.
I have to agree here. I treated myself to a new monitor, connected it to the interface for my QL video output, and it doesn't work. It keeps changing its mind about what signal it wants to accept, meaning none of the above. Connect my old monitor: no problem :(


-----------------------------------------------------------------------------------
QLick here for the Back 2 the QL Blog http://backtotheql.blogspot.co.uk/
Nasta
Gold Card
Posts: 443
Joined: Sun Feb 12, 2012 2:02 am
Location: Zapresic, Croatia

Re: RFC: QL interim project - Peach Keen Dimm 2-A

Post by Nasta »

Brane2 wrote:
Nasta wrote: Just recently I bought a new 'high end' LCD display with LED backlight and IPS matrix. Brand name, too (though I'm not going to tell which one)
Why not? It would be useful to know. I, for one, am planning on renewing my monitor trio.

BTW - was it a Dell U2412M? :P
No, an LG LED-backlit model... but I have since found a few others (probably the same scaler chip and the same half-baked software) that do exactly the same thing.
I've been looking for something with more than full-HD resolution, and I'm already in pain thinking about the problems I'll have with it (most of these don't want to display anything but their native resolution). Sadly, my trusted Sony 21" CRT is REALLY old and starting to have problems... and yet there is still no replacement at any reasonable cost that can do 2048x1536 :P

