
Do you think Google execs keep a secret un-enshittified version of their search engine and LLM?

The Internet being mostly broken at this point is driving me a little insane, and I can't believe that people who have the power to keep a functioning search engine for themselves wouldn't go ahead and do it.

I wonder about this every time I see people(?) crowing about how amazing AI is. Like, is there some secret useful AI out there that plebs like me don't get to use? Because otherwise, huh?

113 comments
  • This is really the fear we should all have. And I've wondered about this specifically in the case of Thiel, who seems quite off their rocker.

    Some things we know.

    Architecturally, the underpinnings of LLMs existed long before the modern crop. "Attention Is All You Need" is basic reading these days; Google literally invented the transformer, but failed to create the first LLM. This is important.

    Modern LLMs came from basically two kinds of scaling applied to the transformer. First, massively scale the network itself. Second, massively scale the training dataset. This is what OpenAI did. What Google missed was that the emergent properties of networks change with scale. But just scaling up a large neural network alone isn't enough: you need enough data to let it converge on interesting and useful features.
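
    The "scale the network AND the data" point can be sketched with the Chinchilla rule of thumb: compute-optimal training uses roughly 20 tokens per parameter. The 20x factor is an empirical heuristic I'm bringing in from outside the thread, not an exact law:

```python
# Rough sketch of the compute-optimal scaling heuristic (Chinchilla):
# training tokens ~= 20 x parameters. The 20x factor is an empirical
# rule of thumb, not an exact law.

def compute_optimal_tokens(params: int, tokens_per_param: int = 20) -> int:
    """Approximate training tokens for a compute-optimally trained model."""
    return params * tokens_per_param

if __name__ == "__main__":
    for params in (1_000_000_000, 70_000_000_000, 400_000_000_000):
        tokens = compute_optimal_tokens(params)
        print(f"{params / 1e9:>5.0f}B params -> ~{tokens / 1e12:.2f}T tokens")
```

    A 400B model wants on the order of 8T training tokens under this heuristic, which is why scaling the network without scaling the corpus leaves it undertrained.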

    On the first part, scaling the network: this is basically what we've done so far, along with some cleverness around how training data is presented, to improve existing generative models. Larger models are basically better models. There is some nuance here, but not much. There have been no new architectural improvements that have produced the kind of order-of-magnitude jump we saw going from the LSTM/GAN days to transformers.

    Now, what we also know is that it's incredibly opaque what is actually presented to the public. Among open-source models, some are in the range of hundreds of billions of parameters, but most aren't that big. I have Qwen3-VL on my local machine; it's 33 billion parameters. I think I've seen some 400B-parameter models in the open-source world, but I haven't bothered downloading them because I can't run them. We don't actually know how many billions of parameters models like Opus 4.5, or whatever shit stack OpenAI is shipping these days, have. It's probably in the range of 200B-500B, which we can infer from the upper limits of what fits on the most advanced server-grade hardware. Beyond that, it's MoE: mixture-of-experts, with expert sub-networks sharded across multiple GPUs.
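
    The 200B-500B inference is just weight-memory arithmetic. A minimal sketch, counting raw weights only (real deployments also need KV cache, activations, and runtime overhead, so this is a floor):

```python
# Back-of-the-envelope model weight memory: parameters x bytes per parameter.
# Ignores KV cache, activations, and runtime overhead, so real deployments
# need more than this.

def weight_gb(params: float, bytes_per_param: float) -> float:
    """GB needed to hold the raw weights at a given precision."""
    return params * bytes_per_param / 1e9

if __name__ == "__main__":
    for params in (200e9, 500e9):
        for name, bpp in (("fp16", 2.0), ("fp8", 1.0)):
            print(f"{params / 1e9:.0f}B @ {name}: {weight_gb(params, bpp):,.0f} GB")
```

    Even at fp8, a 500B model is ~500 GB of weights, i.e. already spread across multiple accelerators.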

    What we haven't seen is any kind of stepwise, order-of-magnitude improvement since the 3.5-to-4 jump OpenAI made a few years ago. It's been very... iterative, which is to say underwhelming, since 2023. It's very clear that an upper limit was reached, and most of the improvements have been QoL and nice engineering, but nothing has fundamentally or noticeably improved in the underlying quality of these models. That is in and of itself interesting, and there could be several explanations for it.

    Getting very far beyond this takes us past the hardware limitations of even the most advanced manufacturing currently available. I think the biggest Blackwell card has ~288GB of VRAM? It might be that at this scale we just don't have the hardware to even peek over the hedge and see how a larger model might perform. That's one explanation: we hit the memory limits of the hardware, and we might not see a major performance improvement until GPUs get into the TB range of memory.
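
    Flipping that arithmetic around: taking the ~288 GB figure above as-is (it's the commenter's estimate, not a verified spec), the largest model whose raw weights fit on a single card depends entirely on precision:

```python
# How many parameters fit in one card's VRAM, counting weights only.
# 288 GB is the figure from the comment above, taken as an assumption.
# Since 1 GB = 1e9 bytes, vram_gb / bytes_per_param gives billions of params.

def max_params_b(vram_gb: float, bytes_per_param: float) -> float:
    """Largest parameter count (in billions) whose weights fit in vram_gb."""
    return vram_gb / bytes_per_param

if __name__ == "__main__":
    for name, bpp in (("fp16", 2.0), ("fp8", 1.0), ("fp4", 0.5)):
        print(f"{name}: ~{max_params_b(288, bpp):.0f}B params in 288 GB")
```

    At fp16 that's ~144B per card; anything in the 200B-500B class is multi-GPU (or aggressively quantized) by construction.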

    Another explanation could be that, at the consumer level, they stopped throwing more compute at the problem. Remember the MoE thing? Well, these companies are, allegedly, supposed to make money. It's possible that they simply stopped throwing more resources at their product lines, and that more MoE actually does result in better performance.

    In the first scenario I outlined, executives would be limited to the same useful-but-kinda-crappy LLMs we all have access to. In the second scenario, executives might have access to super-powered, high-MoE versions.

    If the second scenario is true, and highly clustered LLMs can demonstrate an additional stepwise performance improvement, then we're already fucked. But if that were the case, it's not like Western companies have a monopoly on GPUs or even models, and we're not seeing that kind of massive performance bump elsewhere; so it's likely that MoE also has its limits, and they've been reached at this point. It's also possible we've reached the limits of the training data: that even having consumed all 400,000 years of humanity's output, it's still too dumb to draw a full glass of wine. I don't believe this, but it is possible.

  • Nah, all the SEO nobheads poisoned search well before Google managed it.

    The internet has been inventing nonsense based seemingly on your search queries for a long-ass time.

  • I think yes, and no. There are certainly in-house tools that outside folks don't get. LLMs, for sure, have better tiers and loosened guardrails internally.

    ...buuuuut, people at the 'executive' level are also entirely unlike you and me. They are simultaneously as gullible and foolish as the 'sheep' of society who are buying into the 'AI' hype around LLMs, and so far removed from our situation that even using an LLM or a search engine is entirely outside their experience. They aren't going to use an LLM to plan out a vacation or a work schedule and watch it fail, any more than five years ago they would have looked through an SEO-optimized bullshit website about vacuum cleaners (or a slideshow-ified 'top ten Pacific vacations!' site built to show you a bunch of ads). They'll ask the LLM (or the search engine, glancing only at the AI summary at the top) for the best Pacific vacations and then tell their assistant to plan the trip based on a quick look at the result. (Same for the vacuum cleaner, replacing the one that broke when the house cleaner was trying to get the super-long hair of the super fru-fru breed, allowed in only two rooms of the house, out of the super-luxurious thick rug.)

    The way they use the LLM is perfectly fine for them. They aren't going to see any negatives from it, so the in-house versus publicly available versions aren't really the reason they can 'crow' about it. Same for the general downtrend of the internet: their use case fucking sucks, and it isn't affected.
