50 million rendered polygons vs one spicy 4.2MB boi
Maybe it's time we invent JPUs (json processing units) to equalize the playing field.
The best I can do is an ML model running on an NPU that parses JSON in subtly wrong and impossible to debug ways
Just make it an LJM (Large JSON Model) capable of predicting the next JSON token from the previous JSON tokens, and you'd have massive savings in file storage and network traffic from not having to store and transmit full JSON documents, all in exchange for an "acceptable" error rate.
So you're saying it's already feature complete with most json libraries out there?
Latest Nvidia co-processor can perform 60 million curly brace instructions a second.
Finally, something to process "databases" that ditched excel for json!
60 million CLOPS? No way!
Until then, we have simdjson https://github.com/simdjson/simdjson
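If you want that from Python, the pysimdjson bindings are meant to be close to a drop-in for the stdlib json module. A minimal sketch, assuming the pip package name and the drop-in loads() API from its docs:

```python
# Minimal sketch: SIMD-accelerated parsing via the pysimdjson bindings,
# falling back to the stdlib parser if they aren't installed.
# (Package name and drop-in loads() API assumed from the pysimdjson docs.)
try:
    import simdjson as jsonlib   # pip install pysimdjson
except ImportError:
    import json as jsonlib       # same loads() call, just slower

with open("big.json", "rb") as fh:
    doc = jsonlib.loads(fh.read())

print(type(doc))
```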
JSON and the Argonaut RISC processors
Well, do you have dedicated JSON hardware?
Please no, don't subsidize anything JavaScript. It will only make it less efficient.
And thus JsPU was born from Lemmy
My thoughts on software in general over the past 20 years. So many programs are inefficiently written, and 4th-generation languages just eat up any CPU/memory gain. (Less soapbox and more of a curious what-if about how fast things would be if we still wrote highly optimized programs.)
@Randelung @seaQueue well, i have dedicated JavaScript hardware (https://developer.arm.com/documentation/dui0801/h/A64-Floating-point-Instructions/FJCVTZS)
The R in ARM and RISC is a lie.
At this point ARM is a CISC architecture
You don't?
There were XML DOM accelerators for a while. Might still be out there.
Everybody gangsta till we invent hardware-accelerated JSON parsing
https://ieeexplore.ieee.org/document/9912040 "Hardware Accelerator for JSON Parsing, Querying and Schema Validation" "we can parse and query JSON data at 106 Gbps"
I'm so impressed that this is a thing
106 Gbps
They get this result on 0.6 MB of data (paper, page 5).
They even say:
"Moreover, there is no need to evaluate our design with datasets larger than the ones we have used; we achieve steady state performance with our datasets"
This requires an explanation. I do see the need: if you promise 100 Gbps, you need to process at least a few terabits.
There is acceleration for text processing in AVX iirc
Personally, now that I have a machine capable of running the toolchains, I want to explore hardware-accelerated compilation. Not all steps can be done in parallel, but I bet a lot of the ones before linking can.
Render the json as polygons?
It's time someone wrote a JSON shader.
Ray TraSON
I just added this to my LinkedIn profile. Thanks!
That just results in an image of JSON Bourne.
JSON Sphere
That is sometimes the issue when your code editor is a disguised web browser 😅
No, if you're struggling to load 4.2 MB of text, the issue is not Electron.
There are SIMD-accelerated JSON decoders.
every day we stray further from god
Don't worry, they still make extensive use of regexes.
CPU vs GPU tasks I suppose.
GPU, render my 4.2 MB json file!
I'm afraid I can't do that, Dave
Would you rather have 100,000 kg of tasty supreme pizza, or 200 kg of steaming manure?
Choose wisely.
200kg of steaming manure would be pretty sweet if you had a vegetable garden
Not sure if I'm just missing a reference here, but if you choose the pizza you can have both.
Not a day or two from harvest.
Not sure I'd choose to use the word "sweet" here...
Careful, the 100,000 kg of pizza will turn into manure.
I figure I can probably convert about 10 kg into manure before it autoconverts into compost. Which is maybe even a worse problem.
The pizza can be used to feed some people but you really have to go fast and find hungry people
Manure can be sold easily
I have the same problem with XML too. Notepad++ has a plugin that can format a 50 MB XML file in a few seconds, but my current client won't allow plugins to be installed. So I have to use VS Code, which chokes on anything bigger than what I could do manually myself if I were determined.
Time to train an LLM to format XML and hope for the best
Do we need a "don't parse xml with LLM" copypasta?
Meanwhile, I can open a 1GB file in (stock) vim without any trouble at all.
Formatting is what xmllint is for.
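And if you can't install anything at all (like the locked-down client machine above), the Python standard library can do a rough, slower equivalent. A minimal sketch using xml.dom.minidom, which builds the whole DOM in memory, so expect it to be slow but workable on files in the tens of megabytes:

```python
# Minimal sketch: pretty-print an XML file with only the standard library.
# File names are placeholders passed on the command line.
import sys
from xml.dom import minidom

src, dst = sys.argv[1], sys.argv[2]          # e.g. big.xml big.pretty.xml
dom = minidom.parse(src)                     # loads the whole document into memory
with open(dst, "w", encoding="utf-8") as fh:
    fh.write(dom.toprettyxml(indent="  "))
```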
I use vim macros. You can do some crazy formatting with it
Just install Python and format it. Then you don't need to open a file in a text editor to format it.
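Something like this, or just python -m json.tool big.json big.pretty.json. A stdlib-only sketch (file names are placeholders):

```python
# Minimal sketch: re-indent a JSON file without ever opening it in an editor.
# Roughly what python -m json.tool does.
import json
import sys

src, dst = sys.argv[1], sys.argv[2]          # e.g. big.json big.pretty.json
with open(src, "r", encoding="utf-8") as fh:
    data = json.load(fh)
with open(dst, "w", encoding="utf-8") as fh:
    json.dump(data, fh, indent=2, ensure_ascii=False)
    fh.write("\n")
```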
Someone just needs to make a GPU-accelerated JSON decoder
Works fine in vim
Except if it's a single-line file, only god can help you then. (Or running prettier -w on it before opening it or whatever.)
cat file.json | jq also works
4.2 megs on one line? Vim can probably handle it fine, although syntax won't be highlighted past a certain point.
:syntax off and it works just fine.
Reject MB, embrace MiB.
Reject MiB, call it "MB" like it originally was.
If you're not aware, it was called MB because of JEDEC, back before the IEC units were invented. The IEC units were introduced because they remove the double meaning of the JEDEC ones, which can be read as either decimal or binary. IEC units only carry the binary meaning, which is why they're superior. If you mean 1000 kB = 1 MB, use MB; but if you mean 1024 KiB = 1 MiB, you should be using MiB. It's all about getting the point across, and JEDEC units aren't good at it.
You've got them confused, MiB is the one misusing metric
It isn't misusing metric, it just simply isn't metric at all.
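For scale, the double meaning above works out to roughly a 5% difference. A quick back-of-the-envelope check using the 4.2 "MB" figure from the post:

```python
# Back-of-the-envelope: how much the MB/MiB ambiguity matters for a 4.2 "MB" file.
size_mb  = 4.2 * 1000**2    # 4,200,000 bytes if MB means 10^6 (decimal)
size_mib = 4.2 * 1024**2    # ~4,404,019 bytes if the tool actually meant MiB (2^20)
print(f"decimal: {size_mb:,.0f} B, binary: {size_mib:,.0f} B, "
      f"difference: {size_mib - size_mb:,.0f} B (~{size_mib / size_mb - 1:.1%})")
```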
Rockstar making GTA Online be like: "Computer, here is a 512 MB JSON file. Please download it from the server and then do nothing with it."
Let it be known that heat death is not the last event in the universe
You jest, but I asked for a similar (but much simpler) vector/polygon model, and it generated it.
The obvious solution is parsing JSON with GPUs? Maybe not...
Wow, wouldn't have guessed GPU architecture is compatible with parsing tasks.
C++ vs JavaScript
It's more like GPU vs CPU.
Given that it's the CPU limiting the parsing of the file, I wonder how a GPU-based editor like Zed would handle it.
Been wanting to test out the editor ever since it was partially open-sourced, but I'm too lazy to get around to doing it.
That's not how this works. GPUs are fast because the kind of work they do is embarrassingly parallel and they have hundreds of cores. Loading a JSON file is not something that can be trivially parallelized. Also, Zed uses the GPU for rendering, not for reading files.
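A toy illustration of why it doesn't parallelize trivially: JSON has no self-delimiting chunks, so a slice taken at an arbitrary byte offset isn't valid on its own, and you can't just hand each worker a raw piece of the file. (The SIMD parsers mentioned earlier get around this with a separate structural-indexing pass before the real parse.)

```python
# Toy illustration: a JSON document cut at an arbitrary byte offset is not
# valid JSON on either side, so you can't just hand each worker a raw slice.
import json

doc = '{"name": "a string with a } inside", "values": [1, 2, 3]}'
left, right = doc[:20], doc[20:]

for chunk in (left, right):
    try:
        json.loads(chunk)
        print(f"parsed: {chunk!r}")
    except json.JSONDecodeError as exc:
        print(f"not valid JSON on its own: {chunk!r} ({exc})")
```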
I'd like to point out, for those who aren't in the weeds of silicon architecture, that 'embarrassingly parallel' is a type of computational workload. It's just named that because splitting the work into independent pieces is embarrassingly easy.
I hate to break it to you, bud, but all modern editors are GPU-based.
As far as my understanding goes, Zed uses the GPU only for rendering things on screen. And from what I've heard, most editors do that. I don't understand why Zed uses that as a key marketing point
To appeal to people who don't really understand how stuff works but think GPU is AI and fast