Junior Prompt Engineering
Tech guy invents the concept of giving instructions
With clear requirements and outcome expected
Why did no one think of this before
Who does that? What if they do everything right and it doesn't work and then it turns out it's my fault?
It would be nice if it was possible to describe perfectly what a program is supposed to do.
Yeah but that's a lot of writing. Much less effort to get the plagiarism machine to write it instead.
Ha
None of us would have jobs
I think the joke is that that is literally what coding is.
Who even makes these comics? Is it like Simpsons
Web browsing 101: if you see a hyperlink on social media, you can click on it and then look around to see if it contains more links with useful information, often in the header or footer of the page. Here I found one for you: https://xkcd.com/about/
OP just chatting with themselves so they can screenshot it?
That's some Telegram group, and both messages show on the left with profile icons (which got cropped). The person who took the screenshot sent the last message, which shows double ticks.
In the desktop client the positions of bubbles also depend on the width of the window.
Great attention to detail!
That's just a fake conversation in general. Look at the timestamps between the messages from the interlocutor. Several minutes to type a single sentence?
Hey, i can take a few hours to reply sometimes :c
Could be a group chat but we all know they're a twat
I wrote a shell script like this (IT admin, not a dev) for private use.
The prompt took me like 5 hours of rewriting the instructions.
Don't even know yet if it works (lol)
Neural network: for when saying LLM doesn't sound smart enough
It's just what it was called in the nineties.
LLMs are a type of neural network.
Calling GPT a neural network is pretty generous. It's more like a Markov chain
it legitimately is a neural network, I'm not sure what you're trying to say here. https://en.wikipedia.org/wiki/Generative_pre-trained_transformer
You're right, my bad.
I've played with markov chains. They don't create serious results, ever. ChatGPT is right just often enough for people to think it's right all the time.
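For anyone curious what "playing with Markov chains" looks like in practice, here's a toy bigram version in Python (my own illustration, not anything from the thread): the next word is chosen based only on the current word, which is why the output drifts into nonsense so quickly compared to an LLM conditioning on a long context.

```python
import random
from collections import defaultdict

def train(text):
    """Build a bigram table: word -> list of words that followed it."""
    chain = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain: repeatedly pick a random successor of the last word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = chain.get(out[-1])
        if not options:  # dead end: word never appeared mid-sentence
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
chain = train(corpus)
print(generate(chain, "the"))
```

Every "decision" is a single table lookup plus a dice roll, so it can never be right in the uncanny way GPT often is.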
The core language model isn't a neural network? I agree the full application is more Markov-chainy, but I had no idea the LLM wasn't.
Now I'm wondering if there are any models that are actual neural networks
I'm not an expert. I'd just expect a neural network to follow the core principle of self-improvement. GPT is fundamentally unable to do this. The way it "learns" is closer to the same tech behind predictive text in your phone.
It's the reason why it can't understand why telling you to put glue on pizza is a bad idea.