Clearly, Nvidia did get the lead with ray tracing implementation with its graphics cards, first of all with Turing and now Ampere. So this has led to a couple of things. One is that developers are quite familiar with the ray tracing implementation of Nvidia's architecture, and the second is that, well, technically speaking, Nvidia are on their second generation of ray tracing, whereas with AMD, well, things are obviously the reverse. However, there is still a lot of headroom in ray tracing performance for RDNA 2, as I recently discussed in a video. There is a bit of a follow-up to this, though, and it's actually prompting me to do a lot of testing in the future. Currently I'm finishing off the mesh shader video; now that Nvidia have given me all of the quotes, I'm good to go. Hopefully that video should be up tomorrow, there's no reason for it not to be, and I'm actually really excited, because I think it's a really good video, although possibly I'm biased, so do get subscribed for that if you want to know a ton of information about mesh shaders. Anyway, getting back to the point, or circling back: with the ray tracing implementation of RDNA 2, what we can see is a drastic uptick in performance if we change the way that waves are issued on the GPU, and also if the VGPRs, the vector general-purpose registers, are utilized more efficiently. This was discussed by Philip on Twitter; I'll link the original thread in the description of this video.

Along with my video where I covered the topic. However, there is an update, again thanks to Philip, and it's actually making me really want to do some ray tracing investigation of my own on RDNA 2. But let's have a look at his thread: "So, anyone remember the thread that blew up way beyond what I intended?" He said the patch set mentioned doesn't do anything anymore: with driver 21.3.1, it automatically uses wave32, so AMD is working on it. Now the compiler just needs to improve the VGPR utilization (which, again, are vector general-purpose registers). Philip also provides us a benchmark where, well, you can see for yourself: there is a small but tangible improvement in the ray tracing performance of Metro Exodus, utilizing the same GPU in both tests, along with a 5800X CPU from AMD. So what does all of this mean? Well, I don't believe that RDNA 2 is ever going to catch up with Ampere for ray tracing; Nvidia's architecture is just faster when it comes to ray tracing performance. However, and this is my personal opinion, I'd be very happy to be proven wrong if developers in, let's say, six months have a real handle on AMD's hardware. What I do think is that we will see a gradual improvement for ray tracing on AMD's GPUs for desktop.
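To make the wave32 versus VGPR point a bit more concrete, here is a minimal back-of-the-envelope sketch of why wave size and register usage matter for how many waves a SIMD can keep in flight. The constants (register file size, wave slots, allocation granularity) are assumptions pulled from public RDNA 2 documentation, not measured values, so treat this as an illustration rather than a statement of what the driver actually does:

```python
# Rough occupancy arithmetic for an RDNA 2-style SIMD.
# All constants are assumptions from public docs, not measured values.

VGPRS_PER_SIMD = 1024    # assumed: 128 KB register file / (32 lanes * 4 B)
MAX_WAVES_PER_SIMD = 16  # assumed RDNA 2 wave slots per SIMD
ALLOC_GRANULARITY = 8    # assumed wave32 VGPR allocation block size

def waves_in_flight(vgprs_per_wave: int, wave64: bool = False) -> int:
    """Estimate resident waves per SIMD given a shader's VGPR footprint."""
    # A wave64 wave occupies two physical wave32-sized registers per VGPR.
    cost = vgprs_per_wave * (2 if wave64 else 1)
    # Round the allocation up to the hardware's block granularity.
    cost = -(-cost // ALLOC_GRANULARITY) * ALLOC_GRANULARITY
    return min(MAX_WAVES_PER_SIMD, VGPRS_PER_SIMD // cost)

# e.g. a ray tracing shader needing 96 VGPRs:
print(waves_in_flight(96))               # wave32 -> 10 waves resident
print(waves_in_flight(96, wave64=True))  # wave64 -> 5 waves resident
```

Two separate effects stack here: lower VGPR counts from better compilation mean more waves resident per SIMD for latency hiding, and wave32 means that when rays diverge during BVH traversal, only 32 lanes stall together instead of 64.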

Furthermore, I wonder what this is going to mean because of the consoles. Both the PlayStation 5 and the Xbox Series consoles are essentially using AMD's implementation of hardware-based ray tracing, and while of course the desktop and consoles will probably use ray tracing differently (I'm sure with PC you can adjust the settings how you want), there will naturally be a trickle-down effect. So I'm not saying that this is going to mean developers are only going to focus on AMD optimization, but I do think it's going to be a side effect, and this is probably one of the reasons that Nvidia are being ultra-aggressive at the moment, pushing their own technology such as, for example, DLSS. As we recently discovered, AMD's own FidelityFX Super Resolution, or FSR if you prefer, is rather different in its approach to Nvidia's DLSS. I think that for ray tracing on AMD's next generation, which is going to be RDNA 3, which is already in engineering samples and should release, I'm hearing, about the midpoint of next year at the latest, there should be a tangible, drastic improvement. But it will be very curious to see how ray tracing evolves, particularly because Intel are pushing it too with their Xe architecture. There's a lot of cool stuff with graphics at the moment, I feel. I definitely feel that we're in this transition phase with tech such as variable rate shading, mesh shaders, hardware-based ray tracing and a ton of other things. I mean, at the moment sampler feedback isn't even a thing for PC GPUs, and of course that is going to be changing pretty soon. So yeah, my personal opinion is it's going to be really cool. As I mentioned a moment ago, I am very keen to do some ray tracing testing now on AMD's RDNA 2 class GPUs.

And I think it will be very interesting to see what happens with games coming out, let's say, six months from now, and how their relative performance stacks up against their Nvidia counterparts. It should be extremely exciting, I think. But moving on from AMD's GPUs, well, let's discuss a scavenger hunt, shall we? Let's talk about Intel Xe. And no, I'm not joking about the scavenger hunt, although it might sound like I am. Intel are now doing a lot of teasing for the Xe architecture, and while this is not super-exhaustive information about that architecture, I do think it's at least worth noting, since, at least in my opinion, their endeavors in the GPU market are actually quite interesting. So there is a very small teaser video concerning Intel's Xe architecture, and it's actually been cracked by someone on WCCFTech; I believe their name is, and I don't want to get this wrong, Duck of Death, which is a fantastic name. If you crack the little teaser, it basically leads to an IP address, which then routes you to an official Intel website promoting their GPUs. So on the 26th of March, so not too long at all now, just under a week, Intel will be starting to host a scavenger hunt as they basically start promoting their Xe architecture. Unfortunately, we don't know details such as the release date yet. However, judging by the specifications of DG2 which have leaked, 512 execution units and either 8 or 16 gigabytes of RAM, we can presume that these GPUs are going to be pretty darn performant.

I don't think that they're going to outperform the RTX 3080 Ti, which, as I've leaked, is going to be released by the end of next month. It's had a slight delay by a week or so; it was originally intended to be released mid-April, but it now looks like it's releasing in the last week of April. Anyway, getting back to the point, I do think, though, that the performance of this card is going to be semi-decent, and it really does come down to two things for me. Well, actually three, in this market: one is the availability, two is the pricing, and the third is the software support. Now, assuming Intel can have the availability down pat (get it, Pat? I'm sorry, that was a terrible joke), anyway, yeah, assuming they can get the availability sorted out and the performance is, let's say, around the RTX 3070 (I'm hearing it might actually outperform the RTX 3070, but personally I'm going to err on the side of caution and say it's going to be around the 3070, possibly a little bit slower), then yeah, it could be a really nice product. I'm going to remain somewhat skeptical until I get my hands on one of these things, but I'm hopeful, I really am. It's going to be curious to me not just for the performance, though, but for other things too, such as the heat and power consumption and all of that. But assuming it's fully featured, which, again, given the leaks we've seen, it seems to be, I'm actually quite hopeful, and I think that it would really help alleviate the shortages in the market. And you know, there's a lot of discussion at the moment about fab capacity.

AMD are gobbling up more of TSMC's 7nm capacity, and obviously Apple are starting to reduce their reliance on it as they're shifting to newer processes. But you've got to remember that those fabs take quite a long time to change over; it's not like, you know, you just click your fingers and bam, AMD have just gobbled it up. It can take months for this to make any meaningful difference to the pipeline, is what I'm trying to say, and it's like breaking in new fabs as well: it could be three to five years before we actually start to see any improvement there. So I think that there is a lot of positivity on the improvement in availability of products, but I think that Intel, actually, if it is this year, could make a really meaningful difference in the GPU market. But that's if availability is this year, and if they can get decent quantities of the GPU, and if their price is, you know, competitive, and if their performance is competitive, and if their software isn't, like, completely broken. So there's a lot of ifs, but I'm going to give them the benefit of the doubt, because I have to say that I am very hopeful about the situation. But one last thing for Intel, and this is Alder Lake. I know that Rocket Lake has been about as popular as me telling you, well, you're gonna have to carry this stone across the desert, and also there are going to be scorpions attacking you, and also I'm going to kick your shin every three or four steps.

Oh, and also you're going to be walking not across, you know, desert sand; it's going to be a desert of Legos. But yeah, I have to say that Alder Lake could be fairly decent, and we actually have an exclusive from videocardz.com; I'll, of course, link the article in the video description. So with Intel's Alder Lake, there are several things that we can discern from this particular image. Well, actually images, to be precise, and again, full credit to videocardz.com for this. They are stating that we could be seeing up to 20% performance in single-thread (I'll get to that in just a moment), up to a two times increase in multi-thread performance, new Gracemont cores and hardware-guided scheduling (I find that particularly interesting), improved SoC power including energy-aware core parking, and the latest I/O for Gen 5 and Gen 4 connectivity such as WiFi and Thunderbolt 4, blah blah blah. Who cares? Well, people care, but in this instance I don't. Full memory support: both DDR4 and DDR5 are supported, along with the LP derivatives. And finally, in the next one you can see some platform information, and here it's dual-channel DDR5, which I find interesting given the bit rates of DDR4 and DDR5, but, well, we'll wait on that; of course, it's all about the performance. PCIe Gen 5, which is excellent, and there's also, of course, the platform; I'm not going to read out the platform information, because you can see it yourself.
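Since the dual-channel DDR5 line caught my eye, here's the quick arithmetic behind why it's interesting. The transfer rates below are illustrative JEDEC-style speeds, not confirmed Alder Lake specifications:

```python
# Peak theoretical memory bandwidth: transfers/s * bytes/transfer * channels.
# Speeds here are illustrative, not confirmed platform specs.

def peak_bandwidth_gbs(mt_per_s: int, channels: int, bus_width_bits: int = 64) -> float:
    """Peak bandwidth in GB/s for a given transfer rate and channel count."""
    return mt_per_s * 1e6 * (bus_width_bits / 8) * channels / 1e9

print(peak_bandwidth_gbs(3200, channels=2))  # DDR4-3200 dual channel: 51.2 GB/s
print(peak_bandwidth_gbs(4800, channels=2))  # DDR5-4800 dual channel: 76.8 GB/s
```

One design note: a DDR5 DIMM splits its 64-bit bus into two independent 32-bit subchannels, so "dual channel" on a DDR5 platform really means four subchannels; the peak-bandwidth arithmetic above treats a channel as the full 64-bit width either way.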

Obviously, this is on an entirely new socket as well: it's LGA 1700. And in terms of the IPC, I have a couple of things. I've leaked multiple times, I've discussed multiple times, and, you know, tons of people have said, that it's around a twenty percent gain for Alder Lake, and it seems to be proven here, with "up to". "Up to" is always an interesting word, a term, I guess. Up to in what workloads, in what situations? Is it best-case scenarios where we're seeing 20%? And also, 20% from what are they referring to? The, you know, base architecture being Skylake? Is it going to be the Rocket Lake architecture? I don't know. With that said, I do feel that Intel can be competitive, and while Rocket Lake has certainly become a meme, and frankly it's hard not to meme it, I do feel that certain SKUs are at least decent. The 11400 is decently priced; it's, like, I can't remember exactly, 150 pounds, something like that, for a six-core part. And yes, arguably Zen 3 is better for a lot of people; however, the 5600X is considerably more expensive, so it could be decent if you want a new platform. The only thing, however, is that 10th-gen parts are now also quite cheap from Intel. What I'm trying to say is that Rocket Lake isn't awful; it's mostly the power consumption that, in my opinion, is kind of letting it down.
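To show why the baseline matters so much for an "up to 20%" claim, here's a toy first-order model, treating single-thread performance as IPC times clock. The numbers are invented for illustration, apart from Intel's own roughly 19% IPC claim for Rocket Lake's Cypress Cove cores over Skylake:

```python
# Toy model: single-thread performance ~ IPC x clock. Figures are
# illustrative, not leaked numbers; the point is only that "+20% IPC"
# lands very differently depending on which core it is measured against.

def single_thread_perf(baseline_ipc: float, ipc_gain: float, clock_ghz: float) -> float:
    """First-order single-thread performance estimate."""
    return baseline_ipc * (1.0 + ipc_gain) * clock_ghz

SKYLAKE_IPC = 1.00       # normalized reference point
CYPRESS_COVE_IPC = 1.19  # Intel's own ~19% claim over Skylake

# +20% over Skylake vs +20% over Rocket Lake, both at the same 5.0 GHz:
print(single_thread_perf(SKYLAKE_IPC, 0.20, 5.0))       # 6.0
print(single_thread_perf(CYPRESS_COVE_IPC, 0.20, 5.0))  # ~7.14
```

If that 20% is measured against Cypress Cove rather than Skylake, it's a much bigger deal; if it's against Skylake, Alder Lake barely moves past Rocket Lake in IPC terms.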

However, with Alder Lake, with a decent IPC bump, and assuming we've got the clock frequencies, possibly it will be enough to be competitive with AMD. My personal opinion is that AMD will probably still maintain their leadership, especially in multi-threaded workloads. However, I do think it should at least make Intel competitive, and I think that it's, like, one to two architectures after that that Intel really start to hit back. But hopefully so, because, honestly, it's not fun to just have a one-sided race. That's my personal opinion. With that said, thank you very much for checking out the video. If you've enjoyed it, well, of course you know what to do: subscribe to the channel, of course, and click the bell.

https://www.youtube.com/watch?v=Jpmm6V0VWNc