yeah well anyways, you see rich people do worse shit right in front of you, yet we can't change anything.
We can use the same test name a user proposed in the original post's comments: Odd-straw-in-the-haystack :)
Gemma 3 1B and 3B results on a "needle in a haystack"-style test run locally

I tested this (Reddit link, btw) with the Gemma 3 1B and 3B parameter models. The 1B failed (not surprising), but the 3B passed, which is genuinely surprising. I added a random paragraph about Napoleon Bonaparte (just a random figure) and inserted "My password is = xxx" in the middle of it. Gemma 3 1B couldn't spot it at all, but the 3B did it without even being asked. There's a catch, though: Gemma 3 treated the password statement as a historical fact related to Napoleon lol. Anyways, passing it is a genuinely nice achievement for a 3B model I guess. And it was a single paragraph, moderately large for the test. I accidentally wiped the chat, otherwise I would have attached the exact prompt here.

Tested locally using Ollama and the PageAssist UI. My setup: GPU-poor category, CPU inference with 16 GB of RAM.
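In case anyone wants to reproduce something like it, here's a minimal sketch against Ollama's local /api/generate endpoint. The filler paragraph, the planted password, and the model tags are placeholders (not my original prompt), so adjust them to whatever you have pulled:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

# Filler text with a planted "needle"; the paragraph and password are placeholders.
haystack = (
    "Napoleon Bonaparte rose to prominence during the French Revolution and "
    "led several successful campaigns across Europe. "
    "My password is = hunter2. "
    "He crowned himself Emperor of the French in 1804 and was later exiled to Elba."
)

prompt = (
    "Read the following paragraph and tell me if anything in it looks out of place:\n\n"
    + haystack
)

for model in ("gemma3:1b", "gemma3:4b"):  # use the tags you actually have pulled
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    print(f"--- {model} ---")
    print(resp.json()["response"])
```

With `stream` set to `False` the endpoint returns the whole completion as a single JSON object, which keeps the script simple.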
AI SLOP!
yeah because it's the most popular one and people don't know the others lol. Try out "missi roti", it's even crazier. Lots of texture and taste packed into one.
Yeahh, I often think about how many amazing things the world misses out on that we eat almost every day. Glad to see someone enjoying niche stuff haha
why is the image not visible? is it just me?
Yeah it's all propaganda, I like bashing a keyboard's keys due to sexual reasons.
Welcome
Yeah, an LLM seems like the go-to solution, and the best one. And talking about resources, we can use barely-smart models that can still generate coherent sentences, like 0.5B–3B models offloaded to CPU-only inference.
ayy, that's nice. LLMs are truly overkill just for semantic search though, didn't know there are other ways to achieve this. But we need some intelligence too, right? (somewhat)
i have an amazing solution for this: instead of screenshotting, I save it in a txt file (type it out), so that later, when I have a self-hosted LLM assistant, I can send it all the stuff I've compiled so far and ask for movies/songs/any article I saved, and just do a semantic search through it. Planning to make an open source tool for this, but I'm not too good at ML.
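For the semantic-search part you don't actually need a full LLM; a small embedding model is enough. A minimal sketch, assuming the sentence-transformers library and a folder of .txt notes (the folder path, model name, and example query are just placeholders):

```python
from pathlib import Path

from sentence_transformers import SentenceTransformer, util

# Small embedding model that runs fine on CPU; the name is just a common default.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Load every saved .txt note from a (hypothetical) notes folder.
notes_dir = Path("notes")
notes = [p.read_text(encoding="utf-8") for p in sorted(notes_dir.glob("*.txt"))]

# Embed all notes once; reuse these embeddings for every query.
note_embeddings = model.encode(notes, convert_to_tensor=True)

def search(query: str, top_k: int = 3):
    """Return the top_k notes most semantically similar to the query."""
    query_embedding = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, note_embeddings)[0]
    best = scores.topk(min(top_k, len(notes)))
    return [(notes[int(i)], float(s)) for s, i in zip(best.values, best.indices)]

for text, score in search("that sci-fi movie I wanted to watch"):
    print(f"{score:.2f}  {text[:80]}")
```

An LLM would only be needed on top of this if you want it to summarize or answer questions over whatever the search retrieves.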
Oh okay, didn't know that. Is there a time frame for undeletion?
ok, no 404 when I open it on PC. @sunaurus@lemm.ee does tagging work? I hope it does
Yeah I love this community, was very active on rexxit there
np!
Reddit mods' sense of authority is something you can't dare to snatch lol, those neckbeards would run (for the first time ever in their lives) to give you the ban hammer /s
Bug discussion: Do a permission check before image upload for new accounts trying to upload images
I see this error when I'm trying to upload an icon image for a community I've recently created:
{"data":{"error":"pictrs_response_error","message":"Your account is too new to upload images"},"state":"success"}
I suppose, since the state of the upload was "success" and assuming the API output is correct, the image either got uploaded or got denied after upload. If that's the case, there's room for an improvement: do the permission check before the image upload happens. That way we save bandwidth (it's negligible for a community icon, but I don't know if the same thing happens in other places, like image posts) and prevent useless uploads. And if the upload is actually rejected before any data is transferred, then the API has a bug of reporting a false "success" status instead. Just discussing here before raising an enhancement issue on the GitHub repo; the bug is one of those two cases, I'm not sure which.
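Just to make the ordering I'm proposing concrete, here is a tiny self-contained sketch. The handler, the helper names, and the 7-day threshold are all made up for illustration and are not taken from the actual Lemmy or pict-rs code:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

MIN_ACCOUNT_AGE = timedelta(days=7)  # placeholder threshold, not Lemmy's real value


@dataclass
class Account:
    name: str
    created_at: datetime


def account_too_new(account: Account) -> bool:
    """Permission check that needs no image data at all."""
    return datetime.now(timezone.utc) - account.created_at < MIN_ACCOUNT_AGE


def handle_image_upload(account: Account, image_bytes: bytes) -> dict:
    # Proposed order: check permissions *before* touching the upload body,
    # so no bandwidth or storage is spent on an image that will be rejected.
    if account_too_new(account):
        return {
            "error": "pictrs_response_error",
            "message": "Your account is too new to upload images",
            "state": "failed",  # explicit failure state instead of "success"
        }

    # Only now would the bytes be streamed to the image backend (pict-rs).
    return {"state": "success", "size": len(image_bytes)}


new_account = Account("fresh_user", datetime.now(timezone.utc))
print(handle_image_upload(new_account, b"\x89PNG..."))
```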
Lemmy community for Flask, the popular and lightweight backend framework written in Python.
Join if you want to have some geek discussions about it, or to ask for or offer help.
!flask@lemm.ee