50 million rendered polygons vs one spicy 4.2MB boi
Given that the CPU is what limits the parsing of the file, I wonder how a GPU-based editor like Zed would handle it.
Been wanting to test out the editor ever since it was partially open sourced, but I'm too lazy to get around to doing it.
That's not how this works. GPUs are fast because the kind of work they do is embarrassingly parallel and they have hundreds of cores. Parsing a JSON file is not something that can be trivially parallelized. Also, Zed uses the GPU for rendering, not for reading files.
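A tiny sketch of the contrast this comment is making (my own illustration, not anything from Zed's code; the names are invented for the example): independent per-item work splits trivially across workers, while parsing one JSON document is a single sequential pass because nesting makes chunk boundaries depend on everything before them.

```python
# Illustration only: embarrassingly parallel work vs. parsing one JSON doc.
import json
from concurrent.futures import ThreadPoolExecutor

def shade(pixel: int) -> int:
    # Stand-in for per-element work: each item is independent of the rest,
    # so items can be dispatched to workers in any order.
    return pixel * 2

pixels = list(range(8))
with ThreadPoolExecutor() as pool:
    shaded = list(pool.map(shade, pixels))  # trivially split across workers

# A JSON document has nested structure, so you can't naively cut it into
# chunks and parse them independently: where a chunk "ends" depends on
# every bracket that came before it. json.loads makes one sequential pass.
doc = '{"a": [1, {"b": 2}]}'
parsed = json.loads(doc)
```

(There are clever chunked/SIMD JSON parsers, but they need careful coordination; it's not "embarrassingly" parallel the way per-pixel work is.)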
I'd like to point out, for those who aren't in the weeds of silicon architecture, that 'embarrassingly parallel' is a type of computational workload. It's just named that because the solution was an embarrassingly easy one.
Huh, I was about to correct you on the use of 'embarrassingly', in that I thought the intent was to mean a large amount, but it seems a Wiki edit reverted it to your meaning a year ago. Thanks for making me check!
i hate to break it to you bud but all modern editors are GPU based
As far as my understanding goes, Zed uses the GPU only for rendering things on screen. And from what I've heard, most editors do that. I don't understand why Zed uses that as a key marketing point.
To appeal to people who don't really understand how stuff works but think "GPU" means AI and fast.