|
Post by dshadoff on Dec 4, 2021 21:56:43 GMT
And in the end, I also ask myself: does Deflemask have any way to output MML?

> Trackers and MML are just so fundamentally different, that you would lose a lot in the translation from tracker to MML. But the reverse actually isn't as much of an issue. Trackers are different because you 'shape' the sound with parameters... in real-time (sort of). Whereas MML is more synth-like, in that you just set up some assigned envelopes to control how the sound/note is shaped (volume, vibrato, etc). And if you wanted to do stuff like "key off" in MML, you'd have to do it via... another envelope to track when that key-off happens. In that case, you would need a different way of encoding and playing back the sound. And it sounds like it would take a lot more space, due to the ability (and requirement) to adjust envelopes on-the-fly instead of according to a predefined set.

On older machines, memory and CPU cycles for this are constraints, so how do you find the output to look, according to these measures? Also, Yuzo Koshiro has shared his MML from back in those days several times in tweets, and was intimately familiar with the format. While he may indeed use trackers today (and I agree that they may be easier to use during the development of a chiptune), the limitations of the target machine need to be considered at all times. In the same way that Photoshop may be easy for artists to use, its output is not necessarily compatible with retro machines unless their limitations are considered from the very beginning, and the output is somehow converted into a PCE-compatible format. There is considerable work required to bridge between the present and the past in this way. Personally - and I am not condoning or counselling other people to feel this way - I find that the constraints of older tools are part of the enjoyment of developing for a machine like this. It's a bit like solving a puzzle at times.
Also, this is another outlet for artistic creation (the limitation itself) - pixels, for example, are very limiting for art - but a very specific form of art came from this constraint.
|
|
|
Post by turboxray on Dec 4, 2021 22:29:06 GMT
> On older machines, memory and CPU cycles for this are constraints, so how do you find the output to look, according to these measures? [...] I find that the constraints of older tools are part of the enjoyment of developing for a machine like this.

Trackers aren't new tech though. They started on the Amiga back in like '87. It just wasn't a format prevalent in Japan like it was in Europe. But it was used on other systems besides the Amiga (including 8-bit machines) - it just happened to get its start there. I don't think anyone stores tracked music patterns as raw uncompressed data. It's a bit-mask light-entry compression scheme that's easy to decompress, takes very little CPU resource and very little space (or at least my format does), and the patterns themselves are played by a reference list, so that's another layer of compression (pattern reuse). MML might be older, but they're both retro from that era (late 80s and 90s). Related interesting info:
|
|
|
Post by spenoza on Dec 4, 2021 23:01:46 GMT
Koshiro originally used a tracker he designed himself. It may have simply output to MML. And a tracker could indeed output to MML if it is designed to do so. But I think a better solution right now is what Elmer and Tom are trying to do: take the output from an existing tracker and process it into something useful, rather than trying to get a tracker maker to add a whole new export path.
But if MML is like MIDI, an MML file alone isn't enough. It's just the notes and note parameters, and perhaps an indication of which instrument plays which note. But it doesn't include the instrument parameters themselves (at least MIDI doesn't). MML is the graphics data without the color palette. You still have to define the palette, but instead of just specifying color values you have to specify the waveform and envelope of each instrument. So your musician has to create the tune, convert it to MML, and then fuck around with the waveforms after the fact to make the track sound right (which is what often happened in game development). Working natively with a tracker that can encode instrument AND note parameters makes it a lot easier for the musician, because they can make it sound right while they compose, and don't have to figure out how to make stuff sound acceptable after the composition is done and imported into the project.
If I’m off-base, please recenter me.
|
|
|
Post by DarkKobold on Dec 4, 2021 23:03:25 GMT
> And it sounds like it would take a lot more space, due to the ability (and requirement) to adjust envelopes on-the-fly instead of according to a predefined set.

FWIW, this is correct. Each song for JJ takes about an entire 8kB bank. But you get a heck of a lot more music flexibility for that space. And again, it's a moot issue: if musicians/composers don't want to work in MML, they'll pass on your project.
|
|
|
Post by siudym on Dec 4, 2021 23:09:02 GMT
In the 90s I made music using ProTracker and OctaMED in the Amiga scene, but that experience doesn't help me in any way when I have to get music into assembler.
|
|
|
Post by turboxray on Dec 4, 2021 23:57:39 GMT
> On older machines, memory and CPU cycles for this are constraints, so how do you find the output to look, according to these measures?

It's directly dependent on how much the composer is shaping the sound. You can totally go the route of mostly envelopes, in a tracker, and the size will be about the same as or less than MML-generated output. Air Zonk is a command-string engine, 99% likely derived from MML, and samples aside the tracks are around 4-5k in size (it also employs compression via "loops"). It'd be the same if I plopped that data directly into a tracker format and applied the usual compression. Processing-wise, about the same as well. When both are optimal there's not really a big difference between them, but author complexity in a tracker directly affects size (i.e. using a speed of 1 and doing crazy FX). I clocked Air Zonk at around 12% CPU resource (that doesn't include samples). I've clocked my HuTrack engine at around 7-12% (also not including sample playback time) for the test songs I was checking out. The PSG BIOS player is more advanced than the Air Zonk engine, so your CPU resource cost will probably hit a higher ceiling depending on the instrument complexity.
|
|
|
Post by turboxray on Dec 4, 2021 23:58:57 GMT
> In the 90s I made music using ProTracker and OctaMED on Amiga Scene, but this experience does not help me in any way when I have to get music in assembler

You can still make some decent tunes that way! hahah. I made a demo, tracker style, where the music was done as defines in assembly. I prototyped the music in a tracker first. The Amiga and PCE share the same period-based system and values. So if you keep the samples fixed at 32 samples in the tracker, then it'll line up with the PCE.
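To put rough numbers on that period claim (a sketch of my own, not from the post - the clock constants are the commonly published PAL Paula and PCE master-clock figures, and reusing a ProTracker period directly as the PSG frequency-register value is an assumption):

```python
# Sketch: why Amiga period values line up with the PC Engine PSG when
# waveforms are kept at 32 samples - the two clocks are only ~1% apart.
# Clock constants are the commonly published values; treat them as assumptions.

AMIGA_PAL_CLOCK = 3_546_895   # Paula timing clock (PAL), Hz
PCE_CLOCK       = 3_579_545   # PCE master clock feeding the PSG, Hz

def amiga_tone_hz(period: int, wave_len: int = 32) -> float:
    """Pitch of a looped wave_len-sample waveform played at a Paula period."""
    return AMIGA_PAL_CLOCK / (period * wave_len)

def pce_tone_hz(freq_reg: int) -> float:
    """PSG channel pitch for a 12-bit frequency register (32-step waveform)."""
    return PCE_CLOCK / (32 * freq_reg)

# ProTracker's period for C-3 is 428; reusing it as the PCE register value
# lands within about 1% of the same pitch.
print(round(amiga_tone_hz(428), 1), round(pce_tone_hz(428), 1))
# → 259.0 261.4
```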
|
|
|
Post by elmer on Dec 5, 2021 2:24:41 GMT
> Trackers and MML are just so fundamentally different, that you would lose a lot in the translation from tracker to MML. But the reverse actually isn't as much of an issue. [...] On older machines, memory and CPU cycles for this are constraints, so how do you find the output to look, according to these measures?

I guess that I have a slightly different perspective, because I never had much experience with trackers back in the 80s/90s - the musicians that I was working with then were using different toolchains, more similar to MML, and I jumped straight from those into the MIDI-editing-and-sample-playback era. So, for me, when I looked in horror at the uncompressed size of the Deflemask .dmf files, I took the approach of converting them into something that I was a bit more familiar with. After doing that, I really don't see much difference at all between processing tracker-style data and processing MML-style data, and for me they both reduce to similar-looking strings of single-channel timestamped "command" or "event" data.

> I don't think anyone stores tracked music patterns as raw uncompressed data. It's a bit-mask light-entry compression scheme that's easy to decompress, takes very little CPU resource and very little space (or at least my format does), and the patterns themselves are played by a reference list, so that's another layer of compression (pattern reuse).

I'd be very interested to read more about that scheme ... are you actually talking about a real data compression and decompression step, like RLE or LZ77, or just a removal of redundant data?
The largest of michirin9801's tunes that I converted was "Thunder Force IV - Fighting Back", because of its heavy FX usage, and that reduced to a tiny bit less than 7KB just by converting it into a "command string" format. A quick test with LZ1 compression suggests that the data size could be reduced significantly if I wanted to decompress the patterns as they were being read (or if the tune were stored compressed, and then decompressed before playback). I posted a link to PCE ROMs of all of michirin9801's PCE tunes that I converted for her in this thread. And for anyone who is interested in looking at it, here's what the converted data for "Thunder Force IV - Fighting Back" looks like ... Thunder Force IV - Fighting Back.s (64.68 KB)
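As a toy illustration of that reduction (my own sketch, not Huzak's actual encoding): a mostly-empty pattern column collapses into (wait, event) pairs, so runs of empty rows cost a counter instead of a row each.

```python
# Toy sketch: collapse a sparse per-channel pattern column into a
# timestamped "command string". Not elmer's real format - just the idea of
# replacing the mostly-empty matrix with (rows-to-wait, event) pairs.

def rows_to_stream(rows):
    """rows: one channel's pattern column, None for an empty row.
    Returns a list of (wait, event) pairs."""
    stream, wait = [], 0
    for ev in rows:
        if ev is None:
            wait += 1                      # empty rows accumulate into the delta
        else:
            stream.append((wait, ev))
            wait = 0
    if wait:
        stream.append((wait, "end"))       # trailing silence; hypothetical marker
    return stream

pattern = ["C-4", None, None, None, "E-4", None, None, None]
print(rows_to_stream(pattern))
# → [(0, 'C-4'), (3, 'E-4'), (3, 'end')]
```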
|
|
|
Post by turboxray on Dec 5, 2021 4:37:14 GMT
> I'd be very interested to read more about that scheme ... are you actually talking about a real data compression and decompression step, like RLE or LZ77, or just a removal of redundant data?

I compress it via my converter. The player decompresses it.

Entry, byte 0:
0x00 - 0x7F = mask 1
0x80 - 0xD3 = note + octave
0xD4 - 0xDF = reserved
0xE0 = note cut
0xE1 - 0xFE = <n> rows to skip
0xFF = reserved

mask 1:
D0 = note+octave
D1 = unused
D2 = set: instrument
D3 = set: volume
D4 = set: FX1
D5 = set: FX1.arg
D6 = set: FX2/3/4 (mask 2 follows)
D7 = set: don't use mask

mask 2:
D0 = FX2
D1 = FX2.arg
D2 = FX3
D3 = FX3.arg
D4 = FX4
D5 = FX4.arg
D6 = unused
D7 = unused

So for the pattern line entry, if it's just a note or note+octave, then the pattern channel entry is just a single byte, etc. This is very similar to what FastTracker 2 used in its pattern format. I go beyond FT2 though, as I also have some single-byte values to skip <n> empty entries. In mask mode, for every bit set to 1, the following byte is for that bit. The DMF file format itself has patterns by reference, but the data file actually repeats the pattern data (...why?). So I do per-channel pattern compares and re-reference anything that's exactly the same. The only thing I haven't done yet is that you give the converter a batch of songs and it creates a global instrument define for them (potentially removing any redundant waveforms and envelopes), but the player is set up for it. Everything is set up by full bank+addr references in tables. You could even use channel pattern data from one song in another song hahah.
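For the curious, a decoder for that entry scheme might look like this in Python. The byte-0 ranges and mask bits follow the tables above; the field names, and the mapping of 0xE1 to "skip 1 row", are my assumptions.

```python
# Sketch of a decoder for the bit-mask pattern-entry scheme described above.
# Reserved ranges (0xD4-0xDF, 0xFF) are left unhandled.

def decode_entry(data, pos):
    """Decode one channel entry starting at data[pos]; returns (fields, next_pos)."""
    b0 = data[pos]; pos += 1
    if 0x80 <= b0 <= 0xD3:                    # D7 set: direct note+octave, one byte total
        return {"note": b0 - 0x80}, pos
    if b0 == 0xE0:                            # note cut
        return {"cut": True}, pos
    if 0xE1 <= b0 <= 0xFE:                    # skip <n> empty rows
        return {"skip": b0 - 0xE0}, pos
    # 0x00-0x7F: mask 1 - one extra byte follows for every set bit
    out = {}
    if b0 & 0x01: out["note"]   = data[pos]; pos += 1   # D0: note+octave
    if b0 & 0x04: out["inst"]   = data[pos]; pos += 1   # D2: instrument
    if b0 & 0x08: out["vol"]    = data[pos]; pos += 1   # D3: volume
    if b0 & 0x10: out["fx1"]    = data[pos]; pos += 1   # D4: FX1
    if b0 & 0x20: out["fx1arg"] = data[pos]; pos += 1   # D5: FX1.arg
    if b0 & 0x40:                                       # D6: mask 2 follows
        m2 = data[pos]; pos += 1
        for bit, name in enumerate(("fx2", "fx2arg", "fx3", "fx3arg", "fx4", "fx4arg")):
            if m2 & (1 << bit):
                out[name] = data[pos]; pos += 1
    return out, pos

entry = bytes([0x0C, 0x05, 0x1F])   # mask 1 with instrument + volume set
print(decode_entry(entry, 0))
# → ({'inst': 5, 'vol': 31}, 3)
```

A bare note is one byte, a note-plus-instrument is three, and a run of empty rows is one byte total, which is where the size savings come from.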
|
|
|
Post by siudym on Dec 5, 2021 14:25:01 GMT
|
|
|
Post by TailChao on Dec 5, 2021 16:40:38 GMT
> I really don't see much difference at all between processing Tracker-style data and processing MML-style data, and for me they both reduce to similar-looking strings of single-channel timestamped "command" or "event" data.

This, 100%.
I don't see any issue with a hard line between the composition environment and sound driver. You may have features in one or both which don't carry over well or need weird shims for conversion (especially if the tracks can be dynamic, i.e. have branching points), but these are part of any development environment. You could even work in Famitracker using the Namco N163 and a crazy export chain to a sound driver running on the TurboGrafx.
The exact method used for your own project needs to be chosen based upon its specific needs - there is no game audio panacea.
|
|
|
Post by elmer on Dec 5, 2021 17:30:03 GMT
> The DMF file format itself has patterns by reference, but the data file actually repeats the pattern data (...why?). So I do per-channel pattern compares and re-reference anything that's exactly the same.

Ahhhh, OK, so we're doing the same thing then, which is to separate out each channel's patterns, remove redundant patterns entirely, and then concentrate on encoding each pattern as a stream of commands with embedded timing information, rather than as a mostly-empty matrix of commands (where a 'command' is a note or an effect, and the matrix structure itself encodes the timing). Then it's just a case of how you encode that command stream (and its timing) in order to use as little data as you can, which is the "compression" that you're referring to in this case ... rather than a separate generalized-compression pass over the encoded data (which usually includes a lot of opportunities for LZ77 to shine). IIRC, you saw the somewhat different encoding/compression scheme that I was using in the Huzak source, and it sounds like we're getting similar results in practice. I really like your method; it is easy to process and conceptually elegant. And 'yeah', the .dmf does waste some space, doesn't it!
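The per-channel pattern re-referencing both posts describe can be sketched in a few lines (a toy version of my own, not either converter's actual code): identical pattern bodies are stored once, and the play order becomes a list of indices.

```python
# Toy sketch of per-channel pattern deduplication: compare each pattern
# body against the ones already kept and re-reference exact matches.

def dedupe_patterns(order):
    """order: the song's pattern bodies in play order (as hashable tuples).
    Returns (unique_patterns, reference_list)."""
    unique, index, refs = [], {}, []
    for pat in order:
        if pat not in index:          # first time we've seen this body
            index[pat] = len(unique)
            unique.append(pat)
        refs.append(index[pat])       # the play list stores only an index
    return unique, refs

song = [("C-4", "E-4"), ("G-4",), ("C-4", "E-4")]
print(dedupe_patterns(song))
# → ([('C-4', 'E-4'), ('G-4',)], [0, 1, 0])
```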
|
|
|
Post by turboxray on Dec 6, 2021 6:23:04 GMT
Should we make a new thread for sound engine talk? We keep derailing siudym's thread haha
EDIT: Ohh nevermind. This is 'that' thread.
Okay I was looking at some sizes. Michirin's SMB3 cover is 3.9k total. Some of her other covers are 6-8k. I have a "Vampire Killer" dmf that converts to just 3k! I don't have that TFIV cover dmf.
I also found things in the DMF files that are definitely errors - mostly the macro envelopes. It'll have macros with loop points that are outside the actual size of the whole macro data. So I issue a warning and correct the loop position.
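That fix-up might look something like this (hypothetical names; clamping to the last entry is my guess at what "correct the loop position" means):

```python
# Sketch of validating a macro's loop point against its data, as described
# above: warn on an out-of-range loop and clamp it to something playable.

def fix_macro_loop(values, loop):
    """values: macro envelope data; loop: stored loop index, or None for no loop.
    Returns a loop index guaranteed to fall inside the data."""
    if loop is None or 0 <= loop < len(values):
        return loop                               # already valid
    fixed = len(values) - 1 if values else None   # clamp to the last entry
    print(f"warning: loop point {loop} outside macro of length {len(values)}, "
          f"using {fixed}")
    return fixed
```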
|
|
|
Post by lunoka on Feb 20, 2022 12:10:33 GMT
Hello o/,
Reading through the forums, I've read about a driver called HuTrack, which seems to be usable as a music player for the PCE through Deflemask export.
Is this driver public? Any link to download it? I've found some old releases of Husic too, but wasn't able to build it; it seems to turn MML into HES before playing back on the PCE. Coming from trackers, MML just looks like black magic to me ^^;
|
|
|
Post by elmer on Feb 20, 2022 18:15:42 GMT
Sorry, neither I nor turboxray have made Huzak or HuTrack publicly available yet, because both Deflemask players are still in development. HuTrack seems to be the closest one to release, and IIRC it is being used by michirin9801 for DK and Gredler's games. If you contact turboxray, he may add you to his list of beta-testers (or whatever he's calling it). If that doesn't work, you're welcome to contact me about using Huzak, but please understand that it still needs a lot of work before it's usable in a game, and that it's never going to be aimed at HuC developers. Also, it's currently a low-priority project for me, and I'm busy working on other things ... so turboxray is a *far* better option, if he's willing.

> Coming from trackers, MML just looks like black magic to me ^^;

It's not very user-friendly, is it!
|
|