Bad Apple demos

A place to discuss general QL issues.
Cristian
Aurora
Posts: 962
Joined: Mon Feb 16, 2015 1:40 pm
Location: Veneto

Re: Bad Apple demos

Post by Cristian »

OK, I got it :-)


stephen_usher
Gold Card
Posts: 431
Joined: Tue Mar 11, 2014 8:00 pm
Location: Oxford, UK.
Contact:

Re: Bad Apple demos

Post by stephen_usher »

Cristian wrote:
mk79 wrote:Sure, with some pain you can do some music.
OK, apart from sound issues, what about the (theoretical) feasibility on real machine? The following "making of" might provide some ideas or inspiration:
https://www.youtube.com/watch?v=_j66Nu7BoCE
The screen update rate might be the biggest issue given how slow the QL's display seems to be. It might work better with a QDOS derivative which can take advantage of two screens so as to do double buffering.

The "Spectrum" version requires SD card storage and effectively has pre-rendered video, compressed (I believe) as differential screen memory frames, i.e. only storing the differences in the Spectrum screen memory between frames. Even then it's just a few frames per second. On the QL you'd need a fast WIN device of some kind to do something similar.

Of course, the Spectrum screen lends itself more to this as it's effectively a mono bit-map with low-res colour overlay, which can be ignored for this demo. That's not the case for the QL, which makes for more memory writes and more processing.


stephen_usher
Gold Card
Posts: 431
Joined: Tue Mar 11, 2014 8:00 pm
Location: Oxford, UK.
Contact:

Re: Bad Apple demos

Post by stephen_usher »

The STE version makes extensive use of the Blitter and only works on that machine. Again, it's video using a custom codec optimised for the hardware, in this case the Blitter. It's also dependent upon the version of TOS, as I don't think I managed to get it to work under TOS 2.06, only 1.62.


Peter
QL Wafer Drive
Posts: 1984
Joined: Sat Jan 22, 2011 8:47 am

Re: Bad Apple demos

Post by Peter »

Cristian wrote:What about the feasibility with a (not emulated) QL? I think also the sound would be a big problem...
It's surely feasible on Q40, Q60 and Q68. In the most simple case, one would just write a program which copies uncompressed MODE 4 screens and sampled sound data from SD card. For <= 30 fps that is below 1 MB/s which all three machines can easily handle hardware-wise.

SMSQ/E would suffer from its file size limitation and time-consuming buffering, so the software would either run on "bare metal" or use tricks to work around this.


stephen_usher
Gold Card
Posts: 431
Joined: Tue Mar 11, 2014 8:00 pm
Location: Oxford, UK.
Contact:

Re: Bad Apple demos

Post by stephen_usher »

Well, assuming that you merely want a bitmap image and use mode 4, you could store half the number of bytes of a normal image, as each byte of the 16-bit word storing a set of 8 pixels would be duplicated. If you add run-length encoding this would probably give a decent amount of compression. Alternatively, don't bother with the RLE compression, which may not work for a complex frame anyway as it could make the data size larger. This is how you could store an iFrame.

For intermediate (difference) frames, if fewer than half the 8-bit pixel blocks have changed in a line then you could have sets of block address (byte) and value (byte). If more than half the blocks have changed then the differencing actually makes the data larger and needs more CPU time. The last byte would have an address of 64 (the previous blocks having addresses 0-63), so a simple test of bit 6 will tell you if you've reached the end of a scan-line sequence.

As this would be on a per-line basis, the compression should be quite high. Also, the number of memory writes is greatly decreased.

Each line would be tagged with a starting byte to determine what line type it was.

Of course, if you want to simplify the coding then an iFrame is merely an intermediate frame where all the pixels have changed and hence every line is a complete set of 64 bytes.

Of course, it would be best if the pre-processing of the video were done on another machine, as it'll need a lot of processing power. It would also have to process the video frame by frame, rendering a 512x256 bitmap for each one.

How does that sound as a simple, QL-specific codec?


stephen_usher
Gold Card
Posts: 431
Joined: Tue Mar 11, 2014 8:00 pm
Location: Oxford, UK.
Contact:

Re: Bad Apple demos

Post by stephen_usher »

P.S. I'll see if I can create a proof of concept. I've just managed to build the XTC68 cross compiler on my Mac.


Derek_Stewart
Font of All Knowledge
Posts: 3953
Joined: Mon Dec 20, 2010 11:40 am
Location: Sunny Runcorn, Cheshire, UK

Re: Bad Apple demos

Post by Derek_Stewart »

Hi,

I would be interested to see your proof of concept in C, based on the assembler source code.

A better approach would be to convert the Atari assembly file, ba.s, to QL assembly format, but this would mean changing the way the program works, as the QL does not have a Blitter chip.

Since the Bad Apple video is a sequence of graphic frames and synchronised music, I think something has already been written in SuperBASIC - Goodtimes - that can display the graphics animation.

The music is in WAV format, so it could be converted to the QL Sampled Sound System, maybe with SoX.

I have some Atari ST QL Emulator boards, but only in an STFM, which I do not think has a Blitter in it.

How hard would it be to interface a Blitter chip onto a QL?


Regards,

Derek
stephen_usher
Gold Card
Posts: 431
Joined: Tue Mar 11, 2014 8:00 pm
Location: Oxford, UK.
Contact:

Re: Bad Apple demos

Post by stephen_usher »

Well, this is how my codec should work. It's a generic monochrome mode 4 codec and relies upon other programs to have prepared the mode 4 screen dump files.

My expectation is that the player will render a frame (difference) and then give up its process time to the scheduler. Any music player would be a separate job, possibly with the video player giving a synchronisation message once every n frames.

At the moment I've extracted frames from an animated GIF for testing, using ImageMagick's "convert" to generate XBM files (already byte-encoded bitmaps) and then converting them to QL SCR files. FFmpeg can generate animated GIFs from video formats and do most of the resizing and colour down-conversion; "convert" can do the rest, such as dithering. Why re-invent the wheel?

Anyway, here's the codec algorithm:

Code: Select all

Encoder for a Sinclair QL-specific monochrome mode 4 video codec.

Given the name of a directory containing QL Mode 4 screen dumps with names in
the form (directory name)-(frame number).scr, e.g. "test-23.scr", combines
them into an encoded stream containing the differences between the current and
next frames.

The Sinclair QL Mode 4 screen is a matrix of pairs of bytes, each holding a
bitmap of 8 pixels. The first byte holds the "green" value and the second the
"red" value. There is no "blue" but the combination of "green" + "red" =
"white". We are only dealing with a bit mapped image (black or white pixels)
and hence we only need to store one of these bytes, halving the data. These
bytes containing 8 pixels (big-endian bit order) are called a "block" in this
program.

The QL's screen is arranged as 256 scan lines of 64 blocks wide, so we can
encode the location of the block in the scan line in 6 bits. Conveniently this
also means that we can encode the maximum number of changed blocks in a scan
line in 6 bits also, leaving the top two bits for other information, hence we
can encode each scan line thus:

(code byte){(data)*}

If the code byte has the "start frame" bit set then it starts a new raster
frame, terminating the previous frame immediately even if the current raster
line isn't the last. This means that a frame containing no differences from
the previous frame can be encoded as just one byte, and frames which contain
no changes in their lower parts can be terminated prematurely.

If the code byte doesn't have the "difference" bit set then the following 64
bytes contain (in big-endian bit order) a bitmap of the scan line.

If the code byte has the "difference" bit set then the bottom 6 bits contain
the number of changed pixel blocks. The code byte is then followed by pairs of
bytes, the first being the address of the block along the line (0-63) followed
by the value. The number of blocks to be changed can be zero, in which case
the raster line doesn't need to be modified.

During the encoding if more than half the blocks in a raster line have changed
it's determined that it's more efficient to write a complete raster line
rather than a list of the changed blocks.


stephen_usher
Gold Card
Posts: 431
Joined: Tue Mar 11, 2014 8:00 pm
Location: Oxford, UK.
Contact:

Re: Bad Apple demos

Post by stephen_usher »

Derek_Stewart wrote:How hard would it be to interface a Blitter chip onto a QL?
Quite hard, as it needs quite a bit of glue logic to work and requires a 16-bit memory bus.

Anyway, surely the aim here is to see if we can get something running on a stock (plus large data store) QL in the first instance?


NormanDunbar
Forum Moderator
Posts: 2271
Joined: Tue Dec 14, 2010 9:04 am
Location: Leeds, West Yorkshire, UK
Contact:

Re: Bad Apple demos

Post by NormanDunbar »

vanpeebles wrote:I think you should be telling us a lot more about your AY equipped QL too :ugeek:
Yes please! In fact, isn't it difficult to get hold of an AY-3-8910 these days?

I built a sound effects card using one of these many years ago, from a magazine called Hobby Electronics, I think. (Or Everyday Electronics, or Practical Electronics!)

Cheers,
Norm.


Why do they put lightning conductors on churches?
Author of Arduino Software Internals
Author of Arduino Interrupts

No longer on Twitter, find me on https://mastodon.scot/@NormanDunbar.