Post by dshadoff on Dec 7, 2021 21:30:11 GMT
In any case, for clarity it might be best to use the terminology in the documentation.
pokun
Gun-headed
Posts: 85
Homebrew skills: HuC6280 assembly
Post by pokun on Dec 8, 2021 21:18:57 GMT
As in the official documentation? I agree.
Post by turboxray on Dec 8, 2021 23:44:40 GMT
If it's backwards/awkward/unintuitive.. why continue to propagate it? I mean just because it's official, doesn't mean you need to use it. I mean it's not even consistent with BAT nomenclature. Clarity > official docs. No different than the official docs calling the audio unit "PSG", but people have broken away from that for a more accurate nomenclature.
Post by dshadoff on Dec 8, 2021 23:56:32 GMT
Primarily so that we can understand each other. If we don't like their terminology, we can indeed change it - but in that case we should adopt new terms, rather than trying to reverse the meanings of existing terms.
Post by elmer on Dec 9, 2021 2:58:56 GMT
If it's backwards/awkward/unintuitive.. why continue to propagate it? I mean just because it's official, doesn't mean you need to use it. I mean it's not even consistent with BAT nomenclature.
I absolutely agree that we don't need to follow mistakes or bad examples from the past just because they were made by authority figures, but I just don't feel as strongly as you do on this issue.
The official HuC6270 docs call the copies of data in VRAM the "BAT" and the "SAT", and the copies of the final data inside the HuC6270 chip are called the "background shift register" and the "sprite shift register". I don't see any inconsistency, yet.
I have absolutely no problem at all using the word "buffer" for the area of the HuC6270's silicon where it stores a copy of the SAT, so that it has unrestricted access to it during every scanline while deciding which sprites to display on the next scanline, and without having to read from the "original" SAT in VRAM. Equally well, I can see the argument for calling the copy inside the HuC6270 the "original", and the copy in VRAM the "buffer".
I don't see either nomenclature as being indisputably "incorrect"; it's all just a matter of perspective IMHO, and from my POV, that perspective could easily change depending upon which side of the bed you get out of in the morning.
From Wikipedia... (not that Wikipedia is the source of all "true" knowledge):
In computer science, a data buffer (or just buffer) is a region of a physical memory storage used to temporarily store data while it is being moved from one place to another. Typically, the data is stored in a buffer as it is retrieved from an input device (such as a microphone) or just before it is sent to an output device (such as speakers). However, a buffer may be used when moving data between processes within a computer.
Post by dshadoff on Dec 9, 2021 3:19:50 GMT
The other thing to consider is that a "buffer" is used for faster/direct access to data without requiring the overhead of fetching from a more remote place (or waiting for response)... which is exactly the purpose here. I can understand the use in this context.
In FPGAs, the internal on-chip memory structures are called BRAM (block, not buffer), and can be set up as single-port, dual-port, etc.; in implementing cores for MiSTer, there is often a need to fetch data from SDRAM or DDR3 which have longer latency; when BRAM is used to aggregate the data, or make it available for any period of time, this use is also referred to as a 'buffer'.
So, I don't see its use as being 'wrong' either...
But to be honest, I think the conflation between SAT and SATB started back in the original MagicKit days where they seemed to often be used interchangeably; people seem to have mis-learned from the erroneous documents from that time.
Post by turboxray on Dec 9, 2021 4:46:31 GMT
Yeah, it's definitely not a big deal. At the end of the day, it doesn't hurt anybody or stop anyone from figuring it out. I personally have to deal with this kind of stuff on the daily as an embedded software dev, so I just see no value in upholding the ambiguity - especially just because there's accepted examples of ambiguity.
/rant
Charles MacDonald, Ryphecha (mednafen), Exodus, etc. didn't interpret the vram location to be the 'buffer' and the internal registers to be the actual SAT by simple accident. It's a modern and standard way of looking at things, so it matched expectations. A buffer is a temporary holding system, a stage in a line of transmission. If I decode a compressed stream to a buffer, and then to a final destination in whatever memory and device, I wouldn't call the final decompressed location of the code/data a 'buffer'.
When I worked aerospace, I wrote a transmission mechanism that had a series of ring buffers and locks/syncs, across a PCIe bus from processor to processor. Every stop along the way was a buffer. The final destination wasn't a buffer - it was memory/data (readily available for use). If I had put in my documentation that it ended up in a 'buffer', I would have confused quite a few people haha ('Where does it go after that???').
Yeah, it's semantics because it's a matter of perspective. From the perspective of the hardware engineer, the BAT or SAT value in the 'shift register' deep inside the pipeline could be the final destination. But for a software engineer, that's irrelevant. Once it gets to the SAT (internal regs) or BAT.. that IS the final destination. From their perspective, nothing exists past that point for that data.. or rather, it holds no relevance to them and what is done with it.
I mean you could go even further and say it doesn't even stop there - because it gets transmitted via the digital pixel bus. My point is that hardware engineers don't always follow the software standards when it comes to terminology. You'll definitely find examples of mixed usage. I still see hardware engineers refer to what they consider the smallest (or just typical) element in a system as a 'WORD'. Or a WORD to them is '32 bits' because that's the bus width, or register width, etc. While most everywhere else in the software world 'WORD' typically means 16 bits, and double word (DWORD) means 32 bits, etc. They're not wrong, per se, as you can find documentation that backs it up.. but at the end of the day, especially to the software dev, it just adds confusion and has no value. I've had this direct experience, and arguments with hardware engineers over this (because some of them wrote code too). I've seen hardware engineers who think bits should be numbered starting with 1 and not 0, etc. Or that little endian also refers to bits and not just bytes.
Post by pokun on Dec 9, 2021 15:42:28 GMT
Yeah, neither is really wrong. The reason I feel it's better to go with the official nomenclature is simply because it's official and much more established. It's probably what all developers back in the day were used to, and what homebrewers using Develo were used to. If old source code of games is going to show up, like with the Nintendo giga-leak, it will most likely use the official naming. The official documentation (which I personally think is very well written in many ways and easy to understand as a layman, unlike all of Nintendo's official docs that I've seen) will not go anywhere and will be used by future homebrewers; it's not possible to change history. The work of homebrew pioneers like Charles MacDonald (which is also well written and fills some of the holes in the official docs) will probably also be used in the future, but it is still a brief (though important) part of homebrew history, so I think going back to the official nomenclature is the most painless road to take. And as people are starting to become aware of the mixup, I don't think it will cause any more confusion for newbies than it already does.
Regarding word size, I was taught that a "word" is the size of the data bus in a system, which would be 8 bits if using an 8-bit CPU. Yet "word" almost always seems to mean 16 bits on both 8-bit and 16-bit systems. It has always confused me, but I guess it's because "byte" is already defined as 8 bits on these systems (on some older systems a byte can be 6 bits, 10 bits or other sizes) so there is no need for another term for the same thing. Data sheets for memory chips like EPROM, EEPROM or SRAM are about the only place where I've seen "word" actually being used to mean 8 bits, and they don't use "byte" at all. I guess it's the hardware designers that stick with those rules.
On 32-bit systems they do seem to follow the rules and use "word" for 32 bits, half-word for 16 bits, double-word for 64 bits and byte defined as 8 bits as usual.
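For anyone tripping over this, the two conventions can be put side by side in C. This is just an illustrative sketch: the typedef names follow the Microsoft/Win32 convention, and the fixed-width standard types are the unambiguous alternative.

```c
#include <stdint.h>

/* Microsoft/Win32 convention: WORD was frozen at 16 bits for
 * backwards compatibility, regardless of the native register width. */
typedef uint16_t WORD;
typedef uint32_t DWORD;
typedef uint64_t QWORD;

/* A hardware engineer's "word" is often the bus/register width
 * instead (32 bits on classic ARM, 64 bits on x86-64), which is
 * exactly the mismatch being discussed.  Fixed-width types such as
 * uint16_t and uint32_t sidestep the ambiguity entirely. */
```

So when a file-format spec says "WORD", it almost always means the frozen 16-bit sense, not the machine's native word.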
Also what's that about the PSG? Aren't wavetable chips basically PSGs with programmable waveform shapes? Hmm I see that wavetable sound chips are under their own classification on Wikipedia separate from PSG, FM and PCM.
Post by turboxray on Dec 11, 2021 18:10:10 GMT
so I think going back to the official nomenclature is the most painless road to take. And as people are starting to become aware of the mixup I don't think it will cause any more confusion for newbies than it already is.
From the perspective of "the most painless":
The most painless way, going forward, is to simply add a footnote indicating that historical documentation slightly differs. If people are only now becoming aware of the mix-up, then it wasn't an issue for the past two decades - and it shouldn't be an upset for noobs now either. What has changed? So leaving it as-is is definitely a non-issue, because the past has indicated that.
Who is the 'noob' here? A software developer? Asm or high level? Someone working on cores for FPGA and/or emulation? I think out of that group, the only one that could potentially be affected by a vague mislabel is the person working on FPGA/emulation who's new to the PCE.. at which point, the description of how this works clears EVERYTHING up. Once you read that there's a copy of the SAT in vram, and an internal memory SAT, and a DMA is the only way to move it from the vram copy to the internal copy.. it doesn't matter if the internal memory is labeled SAT instead of SATB. The name for the DMA is vague enough that it actually avoids any confusion; it's not called DmaMySatCopyFromVramToSATB. And the official documentation and patents apparently weren't clear enough.
So what does it mean to change everything going forward? There's more unofficial documentation, tutorials, etc. spread out there than there is official documentation. I would think switching it now would make it more painful, because before you only had one discrepancy (hard-to-find official docs, and at that they're sooo vague they eluded experts/knowledgeable people for over two decades). And now you want to introduce more fragmentation with conflicting terminology? How is that most painless? That's the opposite of painless.. that's just going to add more confusion to something that was a non-issue for 20+ years.
What about all the people that don't get the memo about the change? I open up mednafen, and when I see the section 'SAT' in the memory viewer, the first thing I think is that this is the actual SAT - not the copy in vram. If you changed the label to be SATB, now I'm confused. I mean I'll eventually figure it out, but I'm gonna be annoyed as fuck until I do, and still annoyed for sometime afterwards.
I'll keep 'SAT' as the description for the internal regs instead of SATB, for my documentation/tutorials/etc, but add a footnote in case emulators/debuggers change it. And I'll probably change my language for describing the vram copy to be just the 'SAT copy' in vram instead of referring to it as SATB, going forward.. just in case this pedantic-ness plays out.
Intel, 68k (which is a 32-bit processor regardless of the original bus/alu size), etc.: WORD is 16 bits. Microsoft has been defining WORD as 16 bits, DWORD as 32 bits, QWORD as 64 bits, etc. for decades. Last I checked, my calculator in "programmer" mode also follows that same nomenclature. In my experience with C source code over the years, I often see WORD as a typedef for short, file format descriptions that use 16-bit WORD and 32-bit DWORD, etc. Is your professional background hardware and not software development? I knew things like memory and such defined WORD differently (and from working with PICs), but I always kept hardware and software terminology separated. My issue with the engineer is that they thought everyone would assume 'WORD' meant 32 bits in the software world.. and it does not. Since this was C/C++ they should have used int32 as the type to describe 'shifting the word' or whatever, or made it clear in the comment that it's a '32-bit' WORD.
'Programmable Sound Generator' is pretty generic haha. I mean, almost anything that outputs configurable sound would fall under that vague definition. But I think that label has been applied mostly to sound chips that are generally limited to square wave tones. Even BITD, I thought PSG meant 'Pulse Sound Generation' (pulse as in square) because it was used to describe the POKEY, AY, etc. I've seen the official PCE docs refer to the sound as PSG, but also PSGWSG. I've heard others split what is PSG and what isn't by whether it has a hardware ADSR-type envelope system. I.e. that makes it a 'synth' chip, and thus PSG is not synth. But then that would classify 'Paula' as PSG and 'SID' not as PSG, and that leads to all sorts of confusion haha.
I pseudo-minored in linguistics (one class away from a minor). Words are a powerful thing, more so than we tend to realize. They change our perception of things. I mean just look at the term '8-bit'. People already struggle with what that means, but on a subconscious level - when a label or term gets applied, people will automatically view and evaluate things differently. All because of a word.
I've seen comments where a gamer saw the word 'PCM' describing the sound chip of the PCE in some 90's magazine, and so to them it meant the sound was modern and advanced. I've also seen people refer to the sound chip as PSG, which carries a lot of implications and expectations. If something is given a term, you WILL see that thing through the lens of suggestion/interpretation. I've seen a lot of people comment in the early 2000's that the PCE sounds just like an NES, and that's 'because it's PSG'. I think a lot of people were ignorant of the PCE and lacked any exposure, and coupled with the term 'PSG' and it not sounding exactly like FM or SNES-ish.. concluded it was slightly upgraded NES sound. These discussions were all over the place in lots of forums in the 2000's. If you're told something is more advanced, chances are you'll see it as more advanced, and if you're told something is inferior (directly or indirectly) - you'll see it as inferior. It's qualitative vs quantitative. It's funny because I've seen it where changing someone's mind that it's WSG instead of PSG had that very same effect. So I don't mind classifying it as 'WSG' or something else, simply because of this.
Post by dshadoff on Dec 11, 2021 20:51:59 GMT
Uh... without quoting this entire long post, I'll say two things:
1) SAT and SATB are basically used interchangeably throughout unofficial documents. "Clarifying" the usage will not clarify anything, because the usage is not consistent. It assumes that the reader will infer context and follow along. Sloppy, but it hasn't really been a bone of contention. I'm only saying that it's not precise and not "correct". Perhaps for this reason, these terms shouldn't be used if clarity is the goal.
2) 'WORD' has meant different things to different people since the dawn of its usage. I think it's really only Microsoft who believes that it means 16 bits anymore, but that's a pretty big group of users. Anybody who has been around long enough (and there are fewer and fewer of us) would realize that it is a vague term, and requires additional clarification in a given context. Perhaps for this reason, it shouldn't be used if clarity is the goal.
Side note: Word came into existence in the 16-bit era, when it meant the size of the data bus and internal register size. But then, 16-bit processors with 8-bit busses (among other things) came along and destabilized things. For most of the past 20 years, it has meant 32 bits except in the Microsoft/Intel world. Even "int" was dicey there for about 5-10 years, but it settled down.
Post by pokun on Dec 12, 2021 22:36:10 GMT
1)
As far as I can remember the official documents are very clear and consistent about the usage of SAT and SATB, not vague at all. It's probably the MagicKit documents that started mixing things up like Shadoff said earlier.
People started becoming aware of it around the time the official English documents started to become widespread which is some years ago. I have stuck to the official usage in all my notes since then.
For newbies I meant people new to PC-Engine development like me when I first studied the documents. Assembly is what I use but anyone learning the hardware would be a newbie. Annoyed, yeah that describes my personal experience pretty well with Mednafen and these homebrew documents. I eventually figured it out but I thought for some time that I misunderstood something as it was so consistently wrong in homebrew docs.
The issue is that homebrew pioneers got it wrong in the first place, and the best fix is to mend that mistake as early as possible IMHO. Sticking to the swapped naming longer would only add to the confusion. I don't mean to mend older documents by Charles MacDonald etc., I mean to stick to the official naming when making new things.
No matter what you do, the damage is already done. A footnote about the discrepancy in documents is required either way.
2) Well that explains pretty well why there are so many who assume that a word is a universal term for 16 bits. Isn't the 68K traditionally considered 16-bit due to the data bus being 16-bit though? Then again, with that logic the 65816 would be 8-bit since it uses an 8-bit data bus, but everyone still calls it an 8-/16-bit hybrid. I've even heard people calling the Z80 16-bit just because it has a few 16-bit operations.
32-bit ARM seems to stick with 32-bit words anyway. For modern x86-64 computers in 64-bit mode wouldn't it make more sense if they used 64-bit words? As for my background, I'm a layman, I have no engineering education except for a few programming and digital logic courses at university. I did also in fact minor in linguistics.
3) Heh I'd agree if someone says that an AY-3-8910 or about any other PSG with square tones sounds Nessy, but not the PC-Engine's PSG. Even though a PSG and a WSG are used very similarly their sound characteristics are quite different so I guess there is a good reason to keep them separate. If we didn't consider sound characteristics even the SFC would have a PSG since it's also used in a similar way.
The NES has pretty generic rectangle and noise voices, but the triangle voice makes it stand out a bit since triangles are not so common on PSGs. The PC-Engine is harder to put a finger on what defines its character due to its more versatile wavetable voices, and this is even more true with the SFC. It's both a strength and a weakness: more versatility but less character. On the other hand I think the Game Boy has a VERY distinct sound, and it's probably because it mixes a PSG with a single wavetable voice. Modern stuff has pretty much no character at all since it can sound like anything you want. It's these limitations that give old systems their charm.
Post by dshadoff on Dec 12, 2021 22:53:16 GMT
As far as I can remember the official documents are very clear and consistent about the usage of SAT and SATB, not vague at all. It's probably the MagicKit documents that started mixing things up like Shadoff said earlier.
The MagicKit and mednafen documents and code sloppily used the terms interchangeably. That's where it came from. The original documents were consistent but the homebrew docs were not. This is why I don't think that they can be "simply redefined" as turboxray describes... because they don't have a consistent meaning within those homebrew apps.
Post by turboxray on Dec 13, 2021 2:04:20 GMT
Speaking of consistency.. you'll sometimes see 'bank' and 'page' used interchangeably for PCE stuff.. when they are definitely not the same thing haha (I was guilty of this early on). It was even the cause of a bug in the HuC ASM library that was recently found by ULI.
Post by pokun on Dec 13, 2021 16:49:01 GMT
As far as I can remember the official documents are very clear and consistent about the usage of SAT and SATB, not vague at all. It's probably the MagicKit documents that started mixing things up like Shadoff said earlier.
The MagicKit and mednafen documents and code sloppily used the terms interchangeably. That's where it came from. The original documents were consistent but the homebrew docs were not. This is why I don't think that they can be "simply redefined" as turboxray describes... because they don't have a consistent meaning within those homebrew apps.
I see, yeah I agree.
I guess "bank" and "page" are another thing like "word", in that there is no universal meaning. Even "bank switching" is referred to as "paging" in some communities. In the 6502 world a page seems to universally be 256 bytes of address space, because that's the size of the zero page and stack page, and in the 65816 world a bank is the high byte in a 24-bit address (so 64 kB), but people tend to mix in other bank sizes as well, which always confused me.
I think the official English PC-Engine dev documents also used another term for what the homebrew scene calls "banks", I think it was "blocks". I don't think this is really a problem though and either is fine and understood from context.
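The two conventions mentioned above reduce to simple address arithmetic. A minimal C sketch (the helper names are made up for illustration):

```c
#include <stdint.h>

/* 6502: a "page" is 256 bytes, so the page number is simply the high
 * byte of a 16-bit address (page $00 = zero page, page $01 = stack). */
static inline uint8_t page_6502(uint16_t addr)  { return (uint8_t)(addr >> 8); }

/* 65816: a "bank" is 64 kB, the top byte of a 24-bit address. */
static inline uint8_t bank_65816(uint32_t addr) { return (uint8_t)((addr >> 16) & 0xFF); }
```

E.g. 65816 address $7E2000 sits in bank $7E, while 6502 address $01FF is the top of the stack page.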
Post by turboxray on Dec 13, 2021 22:33:35 GMT
I meant that I've seen coders refer to the page, or MPR, as the 'bank'. As in, they're using the term bank to refer to logical memory. Like "vdc ports are in bank $0000 area", when it should be "vdc ports are in page $0000 area", etc. It can lead to confusion.. and the bug I saw was "lda #bank(label), TAM #bank(label)", when it should have been "lda #bank(label), TAM #page(label)". It just happened to work because the bank number and the page number were the same by accidental coincidence. Once I added a new bank, that bank number no longer had that happy coincidence and things crashed hahah.
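The page/bank distinction behind that bug can be sketched in C (the struct and helper names are invented for illustration; the eight 8 kB MPR slots are the HuC6280's layout):

```c
#include <stdint.h>

/* The HuC6280 splits its 64 kB logical address space into eight 8 kB
 * pages; each MPR maps one page onto one of 256 physical 8 kB banks.
 * "Page" = which slot of logical space an address falls in (what TAM
 * selects); "bank" = which physical chunk is mapped there (what goes
 * into A before the TAM).  The two numbers only match by coincidence. */
typedef struct { uint8_t mpr[8]; } Cpu;

static inline uint8_t page_of(uint16_t logical) { return logical >> 13; }

static inline uint8_t bank_of(const Cpu *c, uint16_t logical) {
    return c->mpr[page_of(logical)];
}
```

So "lda #bank(label), TAM #page(label)" loads a bank number into A and points TAM at a page slot: two different numbers that the buggy code happened to get away with conflating.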