Hacker News | mdre's comments

That would be both feature completeness of the implementation and also some kind of native USD authoring mode (editing the structure and layering multiple USD files properly), like Houdini Solaris or Nvidia Omniverse. For now Blender can’t even read/write MaterialX properly. (I’m not an insider, though.)


What’s really groundbreaking is the amount of ignorance displayed in this video. Also, I’m curious how long it will take for Blender to reach performance parity with OpenGL. Houdini has been at it for a few years now and VK is still 2x slower than OpenGL, apparently.


> Houdini has been at it for a few years now and VK is still 2x slower than OpenGL, apparently

Ouch, since OpenGL isn't exactly known for being performant to begin with.

This sounds a lot like Houdini has its core rendering code designed around the granular 'state-soup model' of OpenGL and is now trying to emulate that same behaviour on top of an API which expects all state combinations to be baked upfront into immutable objects (which in some situations - especially in DCC tools - may turn out to be impossible, because they may not be able to predict all required state combinations, or the number of required state combinations is simply too large to create upfront).

There are quite recent extensions for Vulkan (VK_KHR_dynamic_rendering, VK_EXT_extended_dynamic_state and VK_EXT_shader_object) which try to break the extreme rigidity of the Vulkan 1.0 API and go back to a more OpenGL-like dynamic state soup (which IMHO is going too far, because GL's granular state soup is also its biggest problem - the best compromise is somewhere in the middle, see D3D11 or Metal) - but they might help when moving code from GL to Vulkan without having to discard and create pipeline objects all the time.
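
For readers who haven't dealt with this: the usual workaround in a GL-to-Vulkan port is to hash the mutable draw state into a key and create pipeline objects lazily the first time a combination shows up. Here's a minimal sketch of that idea (illustrative C++ with stand-in types, not Houdini's or Blender's actual code):

  #include <cstddef>
  #include <cstdint>
  #include <unordered_map>

  // Stand-ins for real Vulkan types; a real renderer would use VkPipeline,
  // VkCullModeFlagBits, VkCompareOp, etc.
  using Pipeline = std::uint64_t;

  // The "state soup" a GL-era renderer mutates freely between draws.
  struct DrawState {
      std::uint32_t shader;
      std::uint32_t blendMode;
      std::uint32_t cullMode;
      bool          depthTest;
      bool operator==(const DrawState&) const = default;  // C++20
  };

  struct DrawStateHash {
      std::size_t operator()(const DrawState& s) const {
          return (std::size_t(s.shader) << 24) ^ (std::size_t(s.blendMode) << 16)
               ^ (std::size_t(s.cullMode) << 1) ^ std::size_t(s.depthTest);
      }
  };

  // Vulkan 1.0 model: every distinct state combination needs its own baked,
  // immutable pipeline object, so ported renderers typically keep a cache and
  // create pipelines lazily. The first draw with an unseen combination stalls
  // on pipeline compilation, which is the hitch DCC tools struggle to predict
  // away because users can combine state in arbitrary ways.
  class PipelineCache {
  public:
      Pipeline get(const DrawState& state) {
          auto it = cache_.find(state);
          if (it != cache_.end()) return it->second;
          Pipeline p = ++next_;  // stands in for an expensive vkCreateGraphicsPipelines call
          cache_.emplace(state, p);
          return p;
      }
  private:
      std::unordered_map<DrawState, Pipeline, DrawStateHash> cache_;
      Pipeline next_ = 0;
  };

  int main() {
      PipelineCache cache;
      DrawState a{1, 0, 1, true};
      DrawState b{1, 2, 1, true};  // one toggle differs -> a whole new pipeline
      return cache.get(a) != cache.get(b) ? 0 : 1;
  }

With dynamic rendering plus extended dynamic state / shader objects, much of that state moves back to being set on the command buffer at record time (vkCmdSetCullMode and friends), which is exactly why those extensions make GL-era code easier to port.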


Yes.


how so?


P and B frames are compressed versions of a reference image. Frames resulting from DLSS frame generation are predictions of what a reference image might look like even though one does not actually exist.


But MPEG is lossy compression, which means they are kind of just a guess. That is why MPEG uses motion vectors.

"MPEG uses motion vectors to efficiently compress video data by identifying and describing the movement of objects between frames, allowing the encoder to predict pixel values in the current frame based on information from previous frames, significantly reducing the amount of data needed to represent the video sequence"


There's a real difference between a lossy approximation as done by video compression, and the "just a guess" done by DLSS frame generation. Video encoders have the real frame to use as a target; when trying to minimize the artifacts introduced by compressing with reference to other frames and using motion vectors, the encoder is capable of assessing its own accuracy. DLSS fundamentally has less information when generating new frames, and that's why it introduces much worse motion artifacts.
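
To make that concrete, here's a toy sketch (in C++, 1D "frames", no quantization or entropy coding, purely illustrative) of what a motion-compensated encoder does: it predicts the current frame by shifting the reference, then measures and transmits the residual against the real current frame, so its error is whatever it chooses to leave in after quantization. A frame generator has no real current frame to compute that residual against.

  #include <cstdlib>
  #include <vector>

  using Frame = std::vector<int>;  // toy 1D "frame" of pixel values

  // What an encoder transmits for a predicted frame: a motion vector plus the
  // prediction error (residual). A real codec would quantize the residual.
  struct Encoded {
      int   motionVector;
      Frame residual;
  };

  // Sample with edge clamping so shifted reads stay in range.
  static int sample(const Frame& f, int i) {
      if (i < 0) i = 0;
      if (i >= (int)f.size()) i = (int)f.size() - 1;
      return f[i];
  }

  Encoded encode(const Frame& reference, const Frame& current, int searchRange = 4) {
      int bestMv = 0;
      long bestErr = -1;
      // Brute-force motion search: the encoder can do this because it HAS the
      // real current frame to compare against.
      for (int mv = -searchRange; mv <= searchRange; ++mv) {
          long err = 0;
          for (int i = 0; i < (int)current.size(); ++i)
              err += std::abs(current[i] - sample(reference, i + mv));
          if (bestErr < 0 || err < bestErr) { bestErr = err; bestMv = mv; }
      }
      Encoded out{bestMv, Frame(current.size())};
      for (int i = 0; i < (int)current.size(); ++i)  // residual vs. the real frame
          out.residual[i] = current[i] - sample(reference, i + bestMv);
      return out;
  }

  // Decoder: prediction + residual reproduces the frame (exactly here, and to
  // within the quantization error in a real codec).
  Frame decode(const Frame& reference, const Encoded& e) {
      Frame out(e.residual.size());
      for (int i = 0; i < (int)out.size(); ++i)
          out[i] = sample(reference, i + e.motionVector) + e.residual[i];
      return out;
  }

  int main() {
      Frame ref = {10, 10, 50, 90, 90, 90, 50, 10};
      Frame cur = {10, 10, 10, 50, 90, 90, 90, 50};  // same content shifted right by one
      Encoded e = encode(ref, cur);
      return decode(ref, e) == cur ? 0 : 1;
  }

Frame generation has no equivalent of the residual step: it extrapolates from past frames (and engine-supplied motion vectors), and nothing downstream corrects a bad guess before it hits the screen.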


It would be VERY interesting to have actual quantitative data on how many possible original video frames map to a specific P or B frame vs how many possible raster frames map to a given predicted DLSS frame. The lower this ratio, the more "accurate" the prediction is.


Compression and prediction are the same thing. Decompressing a lossy format is guessing what the original image might have looked like. The difference between fake frames and P and B frames is that for a fake frame, the difference between the prediction and the real frame depends on the user input.

... now I wonder ... Do DLSS models take mouse movements and keypresses into account?


What's your idle power consumption for AMD vs Intel, if you don't mind me asking? I'm getting an average of 125W for my 13900K build, measured at the wall, and it mildly bugs me when I think about it; I thought it'd be closer to 80. And power is very expensive where I live now.


7950X3D, 96GB, 18TB x4, 4TB NVMe x2. My GPUs are a GTX 1080, an RX 570 and the 7950X3D's integrated graphics. FSP 1000W ATX3 Platinum PSU.

I use proxmox as my OS. I have a truenas VM with passed through storage. I have a few VMs and a couple of gaming VMs (Bazzite, Fedora, NixOS)

After boot, idle is around 180-200W because the GPUs don't sleep. Once the VMs with the GPUs passed through are running, this goes down to 110W. My drives don't spin down, so that's around 20W.


If you are getting 125W at the wall on a PC at idle, your machine or operating system is extremely broken, or you are running atmosphere physics simulations all the time. The SoC on my Intel box typically drew < 1W as measured by RAPL. The 9950X draws about 18W measured the same way. Because of platform overhead the difference in terms of ratio is not that large but the Ryzen system is drawing about 40W at the wall when it's just sitting there.
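
If anyone wants to reproduce that kind of measurement on Linux: the package energy counter RAPL exposes lives under /sys/class/powercap (on reasonably recent kernels the same intel-rapl powercap hierarchy also covers Ryzen packages), and sampling it twice gives average package power. Rough sketch; the domain index and permissions vary by machine, so treat the path as an example:

  #include <chrono>
  #include <fstream>
  #include <iostream>
  #include <thread>

  // Reads the cumulative package energy counter (microjoules) exposed by the
  // Linux powercap/RAPL driver. Domain 0 is usually "package-0"; check the
  // adjacent "name" file. Often needs root or relaxed sysfs permissions.
  static long long readEnergyUj(const char* path) {
      std::ifstream f(path);
      long long uj = -1;
      f >> uj;
      return uj;
  }

  int main() {
      const char* path = "/sys/class/powercap/intel-rapl:0/energy_uj";
      long long e0 = readEnergyUj(path);
      if (e0 < 0) { std::cerr << "could not read " << path << "\n"; return 1; }

      std::this_thread::sleep_for(std::chrono::seconds(5));
      long long e1 = readEnergyUj(path);

      // The counter wraps eventually (see max_energy_range_uj); ignored here.
      double watts = (double)(e1 - e0) / 5.0 / 1e6;  // microjoules over 5 s -> watts
      std::cout << "average package power: " << watts << " W\n";
      return 0;
  }

Note that this is package power only; it won't show platform overhead (VRMs, drives, fans, PSU losses), which is why the wall numbers are so much higher.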


A discrete GPU can easily add 20-40W of idle power draw, so that's something to keep in mind. I believe 60-ish watts is pretty typical idle consumption for a desktop system, with Ryzens typically having ~10W higher idle draw than Intel. Some random reviews with whole-system idle measurements:

https://hothardware.com/reviews/amd-ryzen-7-9800x3d-processo...

https://www.techpowerup.com/review/amd-ryzen-7-9800x3d/23.ht...


Those comparisons are using a water cooling rig which already blows out the idle power budget. 60W is in no way typical of PC idle power. Your basic Intel PC draws no more power than a laptop, low single digits of watts at the load, low tens of watts at the wall. My NUC12, which is no slouch, draws <5W at the wall when the display is off and when using Wi-Fi instead of Ethernet.


Hmm. I'm using an AIO cooler, a 3090 and a 1600W Platinum PSU - might be a bit inefficient. I remember unplugging that PSU and the 3090 and plugging in a 650W Gold PSU; the system drew 70W IIRC. That's a wild difference still!


Yeah, oversized power supplies are also responsible for high idle power. "Gold" etc. ratings are for efficiency at 50%-100% of rated power, not how well they scale down to zero, unfortunately. I have never owned a real GPU; I use the IGP or a fanless Quadro, so I don't have firsthand experience with how that impacts idle power.


Gold rating goes down to 20% load, Titanium down to 10%: https://en.wikipedia.org/wiki/80_Plus#Efficiency_level_certi...


Platinum is 90% efficient at 20%, but OP is using a 1600W power supply, so that 20% is 320W. Any load below that is going to be less efficient.
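
Quick arithmetic for why this matters, with efficiency figures below the certified points that are pure guesses (80 Plus says nothing about loads under 10-20%, and the low-load tail varies a lot between units):

  #include <cstdio>

  int main() {
      // Hypothetical example: ~70 W of DC load at idle is only about 4% of a
      // 1600 W unit's rating, far below the lowest certified test point.
      double dcLoadW = 70.0;

      double effAt20pctLoad = 0.90;  // certified Platinum figure (that's 320 W on a 1600 W PSU)
      double effAtIdleGuess = 0.70;  // made-up number for the uncertified low-load tail

      std::printf("wall draw at 90%% efficiency: %.0f W\n", dcLoadW / effAt20pctLoad);  // ~78 W
      std::printf("wall draw at 70%% efficiency: %.0f W\n", dcLoadW / effAtIdleGuess);  // ~100 W
      return 0;
  }

So the same idle load can read tens of watts higher at the wall purely from where the PSU sits on its efficiency curve, and the actual curve for a given unit is rarely published.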


This is cool and all, but it's been 10 years since 4K monitors arrived. Where are the (affordable) DUHD ultrawides, and why is there not a single 42-inch 6K or 8K panel in the entire world?


There's not a mainstream use case for 4k content. Show watchers don't care enough about the quality improvement to pay their streaming provider $5 more for the 4k stream.

Even the gaming enthusiast market has rejected 4k, opting to spend their precious gpu cycles on refresh rate improvements instead.

Panel marketers are going to figure out what number consumers comprehend and make it bigger regardless. Hence their focus on 4k, 8k, and now these silly refresh rates.

But their big problem is that consumer behavior is currently bottlenecked by streaming bandwidth and GPU cycles. Two things panel manufacturers have no control over.

Buy a nice 1440p @ 120 Hz panel and ride out the next decade in comfort.


Thanks for those datapoints. I've checked out of the screen race for over a decade now (around the time 3D glasses were a thing) and have not really kept up.

The displays look incredible, but I was always wondering if that was just me or if other people really bothered to keep up with the latest tech to have a slightly better viewing experience.


> Show watchers don't care enough about the quality improvement to pay their streaming provider $5 more for the 4k stream.

I'd say it's more about show producers and safety. They don't want the audience to see specks of coke in the host's nose.


Well, you could change your user agent string.
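
A minimal sketch of that with libcurl, if anyone wants to try it (example URL; the UA below is one of Googlebot's published strings):

  #include <curl/curl.h>

  int main() {
      curl_global_init(CURL_GLOBAL_DEFAULT);
      CURL* curl = curl_easy_init();
      if (!curl) return 1;

      // Present ourselves as Googlebot; some paywalled sites serve crawlers
      // the full article. The response body goes to stdout by default.
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/article");
      curl_easy_setopt(curl, CURLOPT_USERAGENT,
          "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)");
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

      CURLcode res = curl_easy_perform(curl);

      curl_easy_cleanup(curl);
      curl_global_cleanup();
      return res == CURLE_OK ? 0 : 1;
  }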


This hasn't worked well in a decade or more.

I used to pretend to be Googlebot a lot to bypass paywalls, but I think these days there is either IP range filtering or more advanced fingerprinting.


Was this at Ubisoft maybe? I remember their presentations, some really amazing tech a decade ago... Never seen anything as straightforward to use in off-the-shelf software since.


It wasn't; my friend certainly wasn't the first to do something similar, and this was much more recent. I was just impressed by a demo he showed me of his work. He did procedural rigging for Darktide 40k, and physics-based animations for The Finals. Unfortunately I don't think his software is available outside of the companies that hired him.


I'd say it's even worse, since for rendering OptiX is like 30% faster than CUDA. But that requires the RT cores. At this point AMD is waaay behind hardware-wise.


Fun fact: ZLUDA means something like illusion/delusion/figment. Well played! (I see the main dev is from Poland.)


You should also mention that CUDA in Polish means "miracles" (plural).


