eSRAM question

#21 Solis Posted 11/28/2013 8:42:12 AM
N3xtG3nGam3r posted...
Whoever told you that any "one thing" (a frame buffer) "lives" in eSRAM doesn't know what they are talking about. There is no specific size requirement for anything, including frame buffers. Everything travels in and out of eSRAM all the time; sometimes it all fits, sometimes it doesn't, and it doesn't matter. It's all cyclic buffers, with the move engines handling the management. It's extremely similar to a "unified memory pool", except there's a fast middleman that makes up for the bandwidth limitation of DDR3.

If 720p is all that can be managed with 32MB of eSRAM on die, how would that explain the 360 doing 640-720p with only 10MB of eDRAM on die, a fraction of that size?

It's outrageous to blame any one "thing" for being too big for a subset of RAM, because EVERYTHING is constantly moving in and out of eSRAM at any given time. Remember, this is not just the GPU's resources; this is the CPU, Kinect data, audio data, everything. 32MB isn't "too small" for eSRAM because there is no single thing that needs to be there at any one time. The topic being discussed is the eSRAM, and the best way to utilize it is texture tiling.

Oh, and lol @ the guy who thinks nobody here knows anything. Yes, there are a lot of party poopers hanging around this board, spreading lies and trying to downplay anything and everything they can just to get a rise out of people. I'm not even sure the people trolling are PS4 fanboys; they're probably just internet trolls in general. But there are also a lot of people who don't fully understand the hardware, yet are still capable of repeating things said by people who do know what they're talking about.

This video has been available for well over a month, but people on this board either ignore it, or suffer from extreme amnesia/memory loss.

http://channel9.msdn.com/Events/Build/2013/4-063

As a cache, textures specifically would be a relatively poor use of the eSRAM as a whole, since the memory bandwidth usage in that case would be minimal. How much bandwidth do you think 32MB of texture data requires? When would the performance benefit of faster small-scale texture reads outweigh the benefit of all the other uses the eSRAM could provide?
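
To put a rough number on that, here's a quick back-of-the-envelope calculation (a Python sketch, just for illustration; the 60 fps figure and the "100+ GB/s" ballpark are assumptions on my part, not numbers from this thread):

# Rough illustration only: how much bandwidth does re-reading an entire 32 MB
# of eSRAM-resident texture data once per frame actually consume?
MB = 2**20
GB = 1e9

texture_pool_bytes = 32 * MB   # assume the whole eSRAM is full of texture data
fps = 60                       # assumed frame rate for the example

texture_read_bw = texture_pool_bytes * fps / GB
print(f"~{texture_read_bw:.1f} GB/s")   # ~2 GB/s, a small slice of the eSRAM's
                                        # advertised bandwidth (on the order of 100+ GB/s)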

And as you yourself pointed out, it's a cache: that means the source data needs to be read from somewhere else to begin with. Also, if "everything travels in and out of eSRAM all the time", then a lot of bandwidth is going to be spent just swapping data in and out of the cache so the system can access it. Moving buffers from eSRAM to main memory is not free. In short, used as a cache, the "theoretical" throughput of the eSRAM is going to be dramatically reduced in practice. Oh, and the system is going to be drawing a framebuffer pretty much constantly anyway, and that sure as hell is going to be done in eSRAM first whenever possible.

Let's not pretend eSRAM is some kind of magic lamp. It's absolutely useful for some things, and as I've already pointed out it's wonderful for framebuffer drawing, but throwing out buzzwords like "6GB of textures" or whatever the quote of the day is just ends up misleading people into thinking it does something it can't, and this topic is perfect proof of that. I mean hell, the very person I quoted even said they didn't understand how it worked; they just heard a number and thought it sounded amazing! That's exactly the kind of behavior we should be discouraging. Eliminating ignorance should go both ways.

And 32MB can definitely manage more than 720p (it can hold two double-buffered 1080p render targets, in fact). Who suggested otherwise?
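
If anyone wants to check that math, here's the arithmetic as a tiny Python sketch (assuming plain 32-bit color buffers, no MSAA, and not counting a depth buffer against the budget):

# Quick size check: do two double-buffered 1080p color targets fit in 32 MB?
MB = 2**20
BYTES_PER_PIXEL = 4                          # 32-bit color, no MSAA assumed

one_buffer = 1920 * 1080 * BYTES_PER_PIXEL   # ~7.9 MB per 1080p buffer
double_buffered = 2 * one_buffer             # ~15.8 MB for front + back buffer
two_targets = 2 * double_buffered            # ~31.6 MB for two such targets

print(f"{two_targets / MB:.1f} MB of 32 MB")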
---
"Walking tanks must exist somewhere for there to be such attention to detail like this in mech sim." - IGN Steel Battalion review
#22 HENTAIDOJI Posted 11/28/2013 9:20:46 AM
Solis posted...
mizukage2 posted...
So, could a developer theoretically just ignore the eSRAM entirely and only use the standard RAM so as to avoid any headaches at the cost of whatever speed benefits the eSRAM provides?

No. It is entirely impossible to avoid using the eSRAM; that's where the framebuffer is written. In order for them to not use the eSRAM, they would have to not have any visuals at all. However, this is rather irrelevant, because writing the framebuffer to the eSRAM is no different from writing it to system/video memory on other platforms.


garcia_jx posted...
So does this mean that the Xbox One won't be able to achieve 1080p gaming due in large part to the eSRAM?

The eSRAM is the very thing that would make achieving 1080p easiest, since it pretty much eliminates the bandwidth restriction of writing the framebuffer.


Actually, I do believe it is possible to write the buffer to main memory. Unlike the 360, where that was impossible because the ROPs were coupled to the eDRAM, that's not the case with the Xbone. But yes, there's little advantage to doing it exclusively.
---
http://nopybot.com/wp-content/uploads/2010/08/hentai-demotivational1.jpg
#23 HENTAIDOJI Posted 11/28/2013 9:26:09 AM
garcia_jx posted...
So does this mean that the Xbox One won't be able to achieve 1080p gaming due in large part to the eSRAM?


No. This is the same story as the 360. The dev tools, specifically for tiling, were not ready for launch, which is why we got PGR3 at 600p on the 360 and Ridge Racer 7 at 1080p on the PS3 at their respective launches.
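
For context on why tiling even mattered on the 360, here's the rough math as a Python sketch (it assumes 32-bit color plus a 32-bit depth/stencil buffer, which is one common setup; the resolutions are just examples):

# Why the 360's 10 MB eDRAM forced "predicated tiling" once MSAA entered the picture.
MB = 2**20

def target_size(width, height, msaa=1):
    # 4 bytes color + 4 bytes depth/stencil per sample (assumed layout)
    return width * height * (4 + 4) * msaa

print(f"720p, no AA: {target_size(1280, 720) / MB:.1f} MB")     # ~7.0 MB, fits in 10 MB
print(f"720p, 2x AA: {target_size(1280, 720, 2) / MB:.1f} MB")  # ~14.1 MB, must be tiled
print(f"720p, 4x AA: {target_size(1280, 720, 4) / MB:.1f} MB")  # ~28.1 MB, must be tiled
print(f"600p, no AA: {target_size(1024, 600) / MB:.1f} MB")     # ~4.7 MB, plenty of headroom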
---
http://nopybot.com/wp-content/uploads/2010/08/hentai-demotivational1.jpg
#24 N3xtG3nGam3r Posted 12/1/2013 2:46:54 AM
HENTAIDOJI posted...
garcia_jx posted...
So does this mean that the Xbox One won't be able to achieve 1080p gaming due in large part to the eSRAM?


No. This is the same story as the 360. The dev tools, specifically for tiling, were not ready for launch, which is why we got PGR3 at 600p on the 360 and Ridge Racer 7 at 1080p on the PS3 at their respective launches.


The one guy states that nobody here knows jack **** about anything hardware related.

Three people walk into the topic and explain how the eSRAM will basically be the saving grace of the console. While it might be hard to develop for and learn within, say, a 6-8 month period, the idea that developers can't master the toolsets (which are tweaked and calibrated between developers and MS engineers) when they have 2-3 years to make a game is laughable. These are launch titles. Every game at launch could run on current-gen hardware.

We will truly be in the next gen when a game is released that is simply IMPOSSIBLE to make on a 360 or PS3. That won't happen for another year, as I said.

Maybe blocking everyone on this board, one ignorant comment at a time, is the way to go. I'm afraid that if I did that, though, there wouldn't be anyone left to have a discussion with, good OR bad.

Also, again, like another user said, how the hell was the 360, with 512MB of RAM and 10MB of eDRAM (which is a fraction of the size, and a small fraction as fast), as compared to the
---
ASUS p8h61-M (Rev 3.0) | Intel CORE i3 2100 | 8GB Dual-Channel DDr3 | 500GB HDD | 600w PSU | nVidia GTX 770 4GB GDDr5
#25 AlxT91 Posted 12/1/2013 3:02:07 AM
For those that aren't so sure of what all of this means, I'll sum it up plainly.

eSRAM is necessary because it is how the Xbox One will /try/ to keep up with the PS4's flat-out superior GDDR5 RAM. eSRAM is much stronger on its own, but the paltry amount isn't going to easily offset the disparity.

This /could/ be why Xbone games are at a lower resolution; it could be the dev tools; it could also be the weaker graphics card. But one thing is for sure: devs won't be able to /not/ plan around the eSRAM if they want a competitive end result.

Which is something most people overlook when arguing in favor of eSRAM: it /could/ work out to be better... but it takes much, much more work. The PS3, for instance, was the strongest console of its generation by a WIDE margin, but it was so difficult to develop for that it really hurt quality for many years into its lifecycle.

But the saddest thing of all is that Microsoft could have had better specs and a lower price point if they had dropped the Kinect. All of this difficulty with the system, and the corners they cut, comes from forcing the silly thing in with each and every console.
---
Desktop w/2gb Radeon HD 7870~ 3.6x8GHz AMD FX-8150 (OCd to 4.5GHz)~ 16gb DDR3 ram @1866~ 23.6 ASUS 1080p Monitor
#26 yiangaruuuga Posted 12/1/2013 5:24:31 AM
AlxT91 posted...
For those that aren't so sure of what all of this means, I'll sum it up plainly.

eSRAM is necessary because it is how the Xbox One will /try/ to keep up with the PS4's flat-out superior GDDR5 RAM. eSRAM is much stronger on its own, but the paltry amount isn't going to easily offset the disparity.

This /could/ be why Xbone games are at a lower resolution; it could be the dev tools; it could also be the weaker graphics card. But one thing is for sure: devs won't be able to /not/ plan around the eSRAM if they want a competitive end result.

Which is something most people overlook when arguing in favor of eSRAM: it /could/ work out to be better... but it takes much, much more work. The PS3, for instance, was the strongest console of its generation by a WIDE margin, but it was so difficult to develop for that it really hurt quality for many years into its lifecycle.

But the saddest thing of all is that Microsoft could have had better specs and a lower price point if they had dropped the Kinect. All of this difficulty with the system, and the corners they cut, comes from forcing the silly thing in with each and every console.


Actually, the Kinect only accounts for about $75 of the price, so the Xbox One would still have cost more than the PS4 to make. Meaning MS would be losing even more money per console than they are now.
---
GT: xFrostxPhoenix
Now Playing: BF3, DC 2 Waiting for: MW3,Skyrim, ACR...Monster Hunter is the **** btw.
#27 Kolanifv Posted 12/1/2013 6:08:22 AM
Cowboy082288 posted...
http://gamingbolt.com/nitrous-engine-dev-esram-provides-massive-speedup-for-certain-operations-requires-more-work


Therein lies the problem with it. Last gen, almost everything was coded for PC and the 360 first, then it took a bit more work to convert it to the PS3 because the PS3 did things a little differently. That extra work led to shoddy ports from developers who didn't want to put the time in for whatever reason - look at Bayonetta or any Bethesda game. If the XOne is harder to work with, it'll be the system getting the shoddy ports this gen - which we're already seeing.
#28 N3xtG3nGam3r Posted 12/2/2013 3:08:55 AM
Kolanifv posted...
Cowboy082288 posted...
http://gamingbolt.com/nitrous-engine-dev-esram-provides-massive-speedup-for-certain-operations-requires-more-work


Therein lies the problem with it. Last gen, almost everything was coded for PC and the 360 first, then it took a bit more work to convert it to the PS3 because the PS3 did things a little differently. That extra work led to shoddy ports from developers who didn't want to put the time in for whatever reason - look at Bayonetta or any Bethesda game. If the XOne is harder to work with, it'll be the system getting the shoddy ports this gen - which we're already seeing.


You obviously don't remember the whole argument over which console had the better architecture from a development standpoint.

Whoever designed the PS3 hardware had absolutely no idea what the hell he was doing. The Cell processor was picked up before production, and an IBM prototype was what made it into the final machine. The lead architect thought they could bypass using a GPU altogether and have the CPU do the rendering. The results were terrible.

By then it was too late in the game for them to go to nVidia or ATI (now AMD) and have a special custom chip made for their machine. They were forced to use what was literally a cut-down nVidia 7800GT (which was getting close to a year old at that point), and instead of the 48 shader pipelines the PC card had, the one in the PS3 had around 36 (ballpark).

Since they couldn't have a custom chip made, they had to use a split memory architecture: 256MB of XDR (ultra-fast bandwidth for use with the Cell) and 256MB of GDDR3 for the nVidia GPU. Between the Cell forcing an asymmetric style of programming (you had to individually program each core (SPE) for a specific task: AI, audio, physics, etc., versus symmetric programming, where you write the tasks for the CPU and the cores pick up the workload automatically) and the 256MB limit on the GPU, not only porting but even developing exclusively was a nightmare.
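
To make the asymmetric vs. symmetric difference concrete, here's a toy Python sketch (the task names and the thread pool are just stand-ins for the idea, not actual PS3 or 360 APIs):

# Toy illustration of the two scheduling styles described above.
from concurrent.futures import ThreadPoolExecutor

def update_ai(dt):   return f"ai step ({dt:.4f}s)"
def mix_audio(dt):   return f"audio step ({dt:.4f}s)"
def run_physics(dt): return f"physics step ({dt:.4f}s)"

DT = 1 / 60
tasks = [update_ai, mix_audio, run_physics]

# "Asymmetric" (Cell-like): the programmer statically pins each job to a specific
# core/SPE and has to rebalance by hand if one job starts dominating the frame.
pinned = {0: update_ai, 1: mix_audio, 2: run_physics}
asym_results = [job(DT) for core, job in sorted(pinned.items())]

# "Symmetric" (Xenon-like SMP): jobs go into a shared pool and whichever hardware
# thread is free picks up the next one, so load-balancing happens automatically.
with ThreadPoolExecutor(max_workers=6) as pool:
    sym_results = list(pool.map(lambda job: job(DT), tasks))

print(asym_results)
print(sym_results)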

The 360 had a 512MB unified memory architecture, a tri-core processor with 2 threads per core, and a custom ATI Radeon GPU whose architecture wasn't even out on PC, and didn't come out until AFTER the 360 did. So in every department other than raw CPU power (completely and utterly irrelevant), the 360 was not only developer-friendly but future-proofed. The GPU even partially supported DirectX 10-style effects, which eat up a lot of VRAM, hence why they also added in 10MB of eDRAM, which could sustain anti-aliasing/post-processing effects ''at no cost'' to the rest of the system's performance.

Trying to compare the PS3 development disaster to the eSRAM in the X1 is plain stupid. Read up on developers talking about what was different about Cell programming vs. normal CPU programming, the split memory pool, etc.
There are games in development for the X1 RIGHT NOW that are going to use the eSRAM, and use it to its full potential.

Extra effort? Maybe. But it will not, in any way, shape, or form, try to ''re-invent the wheel'' as far as programming goes, or screw over developers the way Sony did with their hardware choices for the PS3.

Realistically (and I don't know why people don't think about this more often), the decision to go with DDR3 and eSRAM over GDDR5 makes a hell of a lot more sense when you consider the selling points of these new systems. Fast access to the dashboard, applications, running multiple apps, instant switching, picture-in-picture, etc. are all things that GDDR-type RAM is not designed for. As of right now the eSRAM hasn't proven itself, while GDDR5 is well established in the PC gaming world.

This WILL change.
---
ASUS p8h61-M (Rev 3.0) | Intel CORE i3 2100 | 8GB Dual-Channel DDr3 | 500GB HDD | 600w PSU | nVidia GTX 770 4GB GDDr5