|
Post by gredler on Nov 11, 2019 0:13:00 GMT
The directional lighting is very nice, especially in the last shot. It does make sense that asymmetrical lighting would eliminate most chances to mirror art, and the modular kit to build the structures carries forward into most workflows today.
Thanks for another breakdown!
|
|
|
Post by Black_Tiger on Nov 11, 2019 2:05:53 GMT
Seiya Monogatari's variety in structures stands out more than LoXII's while playing through them. Even when a game is just cobbling the same assets into different layouts, not cookie-cuttering the same buildings, rooms, etc. produces a feeling of something special, even before you can put your finger on why.
|
|
gilbot
Punkic Cyborg
Posts: 137
|
Post by gilbot on Nov 11, 2019 2:24:33 GMT
IMHO one of the biggest design mistakes of the 16-bit era was when the design engineer of the Megadrive's VDP chose to use 2 bits of the BAT entry (aka the PATTERN NAME TABLE entry) for the X & Y flip capability, dropping the Megadrive down to 4 possible palettes instead of the 16 palettes that the PCE supports for its backgrounds. I think the MD being able to flip bg tiles in hardware was carried over from the SMS (the MD was essentially an upgraded SMS, and there was no reason to throw away features already in the SMS), and this was one of the many design differences between the Famicom and the SMS, even though the two were considered to have similar capabilities (being able to use 4bpp patterns was definitely a plus for the SMS, but not a decisive win all things considered).
This "advanced" feature in bg maps prompted developers to abuse the background plane a lot (e.g. in Space Harrier nearly everything on screen was actually bg tiles), and it also saved some space (I'm not sure, but I've read somewhere that the smooth 3-D dungeons in Phantasy Star couldn't have been done without it). But the abuse also caused SMS games' graphics to have A LOT of symmetry (in Space Harrier nearly everything is symmetric), and the SMS being unable to flip sprite tiles in hardware forced programmers to either stuff sprites facing different directions into VRAM or re-upload the sprite data (usually the player's sprite) every time things changed. On the other hand, while the Famicom's lack of hardware bg tile flipping was a disadvantage, it at least had hardware sprite flipping, which was essential. At least the MD did get sprite flipping (it would have been really stupid to omit that in the "16-bit" era).
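To make the bit budget concrete, here's a small Python sketch of the two tile-map entry layouts being compared (field names are my own; this is a paraphrase of the formats as commonly documented, not official vendor docs):

```python
# Illustrative sketch: how the flip bits eat palette bits in the Mega
# Drive's nametable entry, vs the PCE's BAT spending those bits on palette.
def decode_md_entry(word):
    """Mega Drive VDP nametable entry: P CC V H TTTTTTTTTTT."""
    return {
        "priority": (word >> 15) & 1,
        "palette":  (word >> 13) & 0b11,   # only 2 bits -> 4 palettes
        "vflip":    (word >> 12) & 1,
        "hflip":    (word >> 11) & 1,
        "tile":     word & 0x7FF,
    }

def decode_pce_bat_entry(word):
    """PC Engine BAT entry: PPPP TTTTTTTTTTTT (no flip bits)."""
    return {
        "palette": (word >> 12) & 0xF,     # 4 bits -> 16 palettes
        "tile":    word & 0xFFF,
    }
```

The 2 flip bits in the MD entry sit exactly where the PCE instead spends bits on its palette index, which is the whole trade-off being argued.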
On the palette side, the MD was already an improvement over the SMS (which, as far as I remember, had two 4bpp palettes for the bg tiles, but sprites got only one, which was also shared with one of the bg palettes, making palette-swapped sprites such as Players 1 & 2 impossible to have on screen at the same time unless you kept two copies in VRAM), but still having only 4 palettes shared by both sprites and backgrounds in the MD was really awful (even the Famicom had 4 sprite palettes independent of its 4 bg palettes, though each palette was only 2bpp). Then again, the PCE was designed by a group of folks who developed games and had the experience of working around the hardware limitations of other machines, while the design engineer of the Megadrive's VDP has famously said that he never talked to any of the software guys while designing the chip.
IMO Hudson developed the PCE to be a heavily beefed-up Famicom (the real "Super Famicom"). They were familiar with the limits of the Famicom, so they made a system that Famicom software developers could easily adapt to (by easy I mean almost immediately, considering that most of their development tools were the same binaries that emitted output for both platforms). If you can do something a certain way on the Famicom, you can do the same in a very similar way on the PCE (and usually do it better, or have easier alternate methods to do it better). One obvious aspect was the CPU: it's almost the same thing as the Famicom's but 4 times as fast, so you could use exactly the same code as on the Famicom and have most of the slowdowns eliminated without any new training.
In the long term it may have been wiser to use a 16-bit CPU, but look at the SFC, which used a 16-bit upgrade of the 6502 so obscure that almost no one was familiar with it (for the 68000 in the MD, at least a number of arcade developers, and the western developers who had worked with the Amiga/Atari ST, knew the CPU and had tools at hand). And the 65816 in the SFC ran at a relatively low clock rate (which even varied in different situations, on top of other hardware limitations), so its benefits were not immediately apparent: most programmers inexperienced with the architecture probably couldn't write optimised 16-bit code at first (or even wrote most of their code in 8-bit mode, which made the PCE's high clock rate obviously favourable), so a lot of early games (and even some later ones) had unbearable slowdowns, which made people think it was a very slow system.
|
|
|
Post by turboxray on Nov 11, 2019 3:52:49 GMT
IMO Hudson developed the PCE to be a heavily beefed-up Famicom (the real "Super Famicom"). They were familiar with the limits of the Famicom, so they made a system that Famicom software developers could easily adapt to (by easy I mean almost immediately, considering that most of their development tools were the same binaries that emitted output for both platforms). If you can do something a certain way on the Famicom, you can do the same in a very similar way on the PCE (and usually do it better, or have easier alternate methods to do it better). One obvious aspect was the CPU: it's almost the same thing as the Famicom's but 4 times as fast, so you could use exactly the same code as on the Famicom and have most of the slowdowns eliminated without any new training. In the long term it may have been wiser to use a 16-bit CPU, but look at the SFC, which used a 16-bit upgrade of the 6502 so obscure that almost no one was familiar with it (for the 68000 in the MD, at least a number of arcade developers, and the western developers who had worked with the Amiga/Atari ST, knew the CPU and had tools at hand). And the 65816 in the SFC ran at a relatively low clock rate (which even varied in different situations, on top of other hardware limitations), so its benefits were not immediately apparent: most programmers inexperienced with the architecture probably couldn't write optimised 16-bit code at first (or even wrote most of their code in 8-bit mode, which made the PCE's high clock rate obviously favourable), so a lot of early games (and even some later ones) had unbearable slowdowns, which made people think it was a very slow system.
Yeah, but the 65816 in the SNES is so impacted by weird impositions that even the PCE's 7.16MHz 8-bit CPU can do faster 16-bit operations. The SNES has wait-states on work RAM (a 2.68MHz bus speed), which gives any instruction cycle that touches work RAM a slower overall cycle time.
Being an accumulator-based design, a lot of access still goes through zero page (direct page) for dynamic indirection. On top of that, there are work RAM refresh wait-states (262 times a frame). The 16-bit modes also add extra cycles over the 65C02 cycle counts, because the data bus is 8-bit and multiplexed.
The 68k honestly is not as impressive as people make it out to be. At least, the original isn't. If the 6809 had run at the same clock speed, it would really have taken the shine off its bigger brother. Not that the 68k doesn't have its strengths, because it does, but it performs much better as a general-purpose processor in a computer system (all RAM vs ROM, relocatable code, etc). When it comes to hardware-assisted systems like the NES, Genesis, SNES, PCE, etc, it doesn't really have to do a whole lot. You don't have bus arbitration, so the shared-memory advantage of the 68k isn't a thing on the Genesis (it is on the Amiga and ST - video memory mapped into CPU memory space). The original 68k has a lot of slow instructions. Interrupt-driven services have a lot of jitter (one reason the Genesis has dedicated hardware scroll tables in VRAM). Function calling has much more overhead than on the 8-bit accumulator processors (for JSR/RTS alone). I mean, it has nice features, like linear memory access, but that's not really a performance factor.
Years back, there were multiple conversations about the 68000 vs the 65816 vs the 6280 (the famous Steve Snake participated too). In the end, optimized 6280 vs 68000 put them on par, with the 6280 pulling ahead in some cases. Exophase (an emulator author) ran a MIPS analysis on PCE games vs Genesis games, and found the MIPS in the PCE games were much higher than people realized. Why? Because game logic is filled with lots of simple flag/var tests and branches. You don't need 32-bit registers and variable types for that.
A lot of 16bit +/- 16bit or 32bit +/- 32bit operations on the 68k are done for speed, when in most cases they can be done as 16bit+8bit or 24bit+8bit operations on the 6280 (an early branch on no carry saves cycles). The '020, '030, etc are impressive. I just don't think the 68000 in these consoles is as impressive as the hype makes it out to be (people just don't have 65x eyes haha). It's just easy to code for. Basically, it's hard to write slow assembly code for it haha. Leaving all the hardware as-is, putting a 68k into the PCE at the same 7.16MHz clock would actually have slowed the system down for full-screen raster effects. I mean, the NES (even with the 6502's disadvantages vs the 65C02) has done some really impressive stuff. A lot of early NES games slow down simply because they compress the hell out of their games (realtime in-game decompression isn't cheap), and the 8x8 sprite size slows things down when you have meta-objects made of like 6-9 sprites each.
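The 16bit+8bit point can be sketched like this: a minimal Python model of the 65x idiom where you add an 8-bit delta to the low byte of a multi-byte value and only touch the upper bytes when the add carries (byte-op counts are illustrative, not real HuC6280 cycle counts):

```python
# Sketch of the 65x "early branch on no carry" idiom: a 24-bit value is
# kept as three bytes, an 8-bit delta is added to the low byte, and the
# upper bytes are only touched when the add carries out.
def add24_early_branch(lo, mid, hi, delta):
    byte_ops = 1
    lo += delta
    if lo < 0x100:                 # BCC: no carry, upper bytes untouched
        return lo, mid, hi, byte_ops
    lo &= 0xFF
    byte_ops += 1
    mid += 1                       # propagate carry (INC)
    if mid < 0x100:
        return lo, mid, hi, byte_ops
    mid &= 0xFF
    byte_ops += 1
    hi = (hi + 1) & 0xFF
    return lo, mid, hi, byte_ops
```

In the common case (no carry out of the low byte) only one byte-sized add runs, which is why a fast 8-bit CPU can keep up with "wider" math on game-logic workloads.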
|
|
|
Post by elmer on Nov 12, 2019 1:55:58 GMT
Years back, there were multiple conversations about the 68000 vs the 65816 vs the 6280 (the famous Steve Snake participated too). In the end, optimized 6280 vs 68000 put them on par, with the 6280 pulling ahead in some cases. Exophase (an emulator author) ran a MIPS analysis on PCE games vs Genesis games, and found the MIPS in the PCE games were much higher than people realized. Why? Because game logic is filled with lots of simple flag/var tests and branches. You don't need 32-bit registers and variable types for that.
Yep, this is a very old argument and IIRC, as you say, the conclusion was that the PC Engine can hold its head up against the rest of the 16-bit generation machines when in the hands of a decent programmer.
In the long term it may have been wiser to use a 16-bit CPU, but look at the SFC, which used a 16-bit upgrade of the 6502 so obscure that almost no one was familiar with it. And the 65816 in the SFC ran at a relatively low clock rate, so its benefits were not immediately apparent: most programmers inexperienced with the architecture probably couldn't write optimised 16-bit code at first (or even wrote most of their code in 8-bit mode, which made the PCE's high clock rate obviously favourable), so a lot of early games (and even some later ones) had unbearable slowdowns, which made people think it was a very slow system.
You seem to be vastly overestimating just how different the 65C816 is from the 6502, and vastly underestimating how little time it takes a competent assembly-language programmer to learn a new CPU architecture (which the 65C816 really isn't, anyway).
IMHO, what did take developers considerable time on the SNES was learning how to battle the combination of the slow-clocked 65C816, Nintendo's strange memory-mapping scheme, and the VDP chip locking you out of doing any work on VRAM contents during the frame itself, which forces the time-consuming creation of DMA lists for the VDP to process during vblank ... all on a CPU that either slows you down with long-address instructions and unnecessary 16-bit operations, or hits you with the cost of wasteful REP and SEP operations to switch between 8-bit and 16-bit modes.
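The DMA-list pattern described here can be sketched as a queue that's filled during active display and flushed when vblank starts. A minimal Python model (class and method names are invented for illustration, not any real SNES SDK):

```python
# Sketch: VRAM can't be touched mid-frame, so writes are recorded during
# the frame and performed as a batch once vblank begins.
class VramQueue:
    def __init__(self):
        self.jobs = []

    def queue_write(self, vram_addr, data):
        # called during active display: just record the transfer
        self.jobs.append((vram_addr, bytes(data)))

    def flush_at_vblank(self, vram):
        # called once vblank begins: perform every queued transfer
        for addr, data in self.jobs:
            vram[addr:addr + len(data)] = data
        self.jobs.clear()
```

The time cost elmer is pointing at is exactly the bookkeeping in `queue_write`: every update pays for being staged and copied later instead of being written directly.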
|
|
|
Post by spenoza on Nov 12, 2019 3:16:51 GMT
The Apple IIgs proved the CPU was a solid, perfectly functional design, even as woefully underclocked as it was there, too. But at least they paired it with better support hardware.
|
|
|
Post by elmer on Nov 14, 2019 0:48:01 GMT
The Apple IIgs proved the CPU was a solid, perfectly functional design, even as woefully underclocked as it was there, too. But at least they paired it with better support hardware.
Yes, the 65C816 was "solid" and "perfectly functional" ... but it wasn't really "woefully underclocked" in either the SNES or the Apple IIgs. It was running as fast as it possibly could, given the speed of affordable DRAM at the time and the design of the chip itself, which meant that a complete memory access had to take place in half a clock cycle.
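As back-of-envelope arithmetic for that half-cycle constraint (assuming the commonly cited 3.58MHz SNES "fast" CPU clock; the numbers are purely illustrative):

```python
# If a complete memory access must fit in half a clock cycle, then the
# memory's access time puts a hard cap on the CPU clock, and vice versa.
def half_cycle_ns(clock_mhz):
    # length of half a clock cycle, in nanoseconds
    return 1000.0 / (2 * clock_mhz)

def max_clock_mhz(mem_access_ns):
    # fastest clock the memory permits: period >= 2 * access time
    return 1000.0 / (2 * mem_access_ns)
```

At 3.58MHz the half-cycle window is roughly 140ns, which is in the same neighbourhood as late-80s commodity DRAM access times, so there wasn't much clock headroom left to give away.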
|
|
|
Post by spenoza on Nov 14, 2019 3:16:44 GMT
I read recently that Apple deliberately underclocked the CPU in the IIgs to avoid competing with the nascent Macintosh. Had it run at a higher clock speed, the Mac would have had no performance edge at all, at least in the earliest models.
|
|
pokun
Gun-headed
Posts: 85
Homebrew skills: HuC6280 assembly
|
Post by pokun on Nov 14, 2019 15:29:22 GMT
That might just mean that Apple avoided choosing a faster CPU or faster RAM. The SFC was supposed to be backwards compatible with the Famicom, so I suppose Nintendo chose the 65816 because of its 8-/16-bit hybrid qualities and 6502 emulation mode. Then when they scrapped the plans for backwards compatibility, they decided the 65816 was still good enough or something, I guess.
|
|
TailChao
Gun-headed
I Must Eat Muffin Gear.
Posts: 68
Fave PCE Game Overall: Bonk's Adventure
|
Post by TailChao on Nov 14, 2019 16:02:03 GMT
You seem to be vastly overestimating just how different the 65C816 is from the 6502, and vastly underestimating how little time it takes a competent assembly-language programmer to learn a new CPU architecture (which the 65C816 really isn't, anyway).
Yeah, my biggest hurdle when I first met the 65816 was learning not to treat it like a 6502. Once the X and Y registers are in 16-bit mode you can actually do civilized address+displacement math and arrays-of-structs instead of structs-of-arrays, and suddenly it's like working with a more limited 6809.
IMHO, what did take developers considerable time on the SNES was learning how to battle the combination of the slow-clocked 65C816, Nintendo's strange memory-mapping scheme, and the VDP chip locking you out of doing any work on VRAM contents during the frame itself, which forces the time-consuming creation of DMA lists for the VDP to process during vblank ... all on a CPU that either slows you down with long-address instructions and unnecessary 16-bit operations, or hits you with the cost of wasteful REP and SEP operations to switch between 8-bit and 16-bit modes.
Getting locked out of VRAM during active display isn't too big a deal, honestly. If we're comparing this to the PC-Engine's unrestricted access, that always felt like an exception rather than an expected feature. I'm still pissed that NEC / Hudson didn't expose the HuC6270's registers directly and opted for going through ports - it makes combining raster effects with background data transfers a real pain.
What was a pain on the SNES is its sprite controller: total garbage. Which is a shame, since the background and color math stuff felt very well thought out.
But pretty much "all three" have some design inconveniences, and there's not much that can be done about it.
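The arrays-of-structs point above can be sketched in Python: with a 16-bit index register, one indexed access covers base + i*stride + field offset, so each object's fields can live together instead of in parallel per-field arrays (the layout constants here are invented for illustration):

```python
# Sketch: arrays-of-structs addressing vs structs-of-arrays.
STRIDE = 4            # bytes per object: x, y, frame, flags
OFF_X, OFF_Y = 0, 1   # field offsets within one object

def aos_read(mem, base, i, field_off):
    # one indexed fetch, e.g. lda base+field_off,x with x = i*STRIDE
    return mem[base + i * STRIDE + field_off]

def soa_read(field_arrays, field, i):
    # structs-of-arrays: a separate base address per field
    return field_arrays[field][i]
```

With only 8-bit index registers the i*STRIDE term is awkward, which is one reason 6502 code leans on the structs-of-arrays layout; a 16-bit index makes the AoS form natural.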
|
|
|
Post by spenoza on Nov 14, 2019 19:44:39 GMT
That might just mean that Apple avoided choosing a faster CPU or faster RAM. The SFC was supposed to be backwards compatible with the Famicom, so I suppose Nintendo chose the 65816 because of its 8-/16-bit hybrid qualities and 6502 emulation mode. Then when they scrapped the plans for backwards compatibility, they decided the 65816 was still good enough or something, I guess.
Well, Apple wanted some of that backwards compatibility, too... But yeah, the 65816 wasn't the weak point of the IIgs; the weak point was Apple's hobbling of it to preserve Jobs's (stolen) pet project.
|
|