
Discuss.Online reduces infrastructure: here is an update on how the server infrastructure has changed

Word never really got out about Discuss.Online, which was set up to handle a huge influx of signups. But the signups haven't materialized. Here's what the admin has to say.

cross-posted from: https://discuss.online/post/198448

Timeline and reasoning behind recent infra changes

Recently, you may have noticed some planned outages and site issues. I've decided to scale down the size and resilience of the infrastructure, and I want to explain why. The tl;dr is cost.

Reasons

  • I started discuss.online about 4 weeks ago. I had hoped that the reaction to Reddit's API changes would create a huge rush to something new, for the people, by the people; however, people did not respond this way.
  • I built my Lemmy instance like any other enterprise software I have worked on. I planned for reliability and performance. This, of course, costs money. I wanted to be known as the poster child for how Lemmy should operate.
  • As I built out the services from a single server instance to what it became, the cost went up dramatically. I justified this by assuming that the rush of traffic would bring enough donors to offset the cost of better performance and reliability.
  • The traffic load on discuss.online is less than extraordinary. I've decided that I've way over-engineered the resilience and scale. Some subreddits that had originally planned to stay closed decided to re-open, so I no longer needed to be large.
  • The pricing of the servers had gotten way out of control: more than the cost of some of the largest Lemmy instances, while serving a fraction of the user base.

Previous infrastructure

  • Load balancer (2 Nodes @ $24/month total)
  • Two front-end servers (2 Nodes @ $84/month total)
  • Backend Server (1 Node @ $84/month total)
  • Pictures server (1 Node @ $14/month total)
  • Database (2 Nodes @ $240/month total)
  • Object Storage ($5/month + Usage see: https://docs.digitalocean.com/products/spaces/details/pricing/)
  • Extra Volume Storage ($10/month)
  • wiki.discuss.online web node ($7/month)
  • wiki.discuss.online database node ($15/month) [Total cost for Lemmy Alone: $483 + Usage]

Additionally:

  • I run a server for log management that clears all logs after 14 days. This helps with finding issues. This has not changed. ($21/month)
  • Mastodon server & DB ($42/$15/+storage ~ $60 total/month)
  • Matrix server & DB ($42/$30/+storage ~ $75 total/month)

Total Monthly server cost out of pocket: ~$640/month.
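For a quick sanity check, the monthly totals can be tallied in a few lines of Python (figures copied from the lists above; the dictionary keys are just labels, not real service names):

```python
# Monthly costs (USD) from the previous infrastructure list.
lemmy = {
    "load_balancer": 24,
    "front_end_servers": 84,
    "backend_server": 84,
    "pictures_server": 14,
    "database": 240,
    "object_storage": 5,        # + usage
    "extra_volume_storage": 10,
    "wiki_web": 7,
    "wiki_db": 15,
}

# Services that stayed the same: log management, Mastodon, Matrix.
other = {
    "log_management": 21,
    "mastodon": 60,  # server + DB + storage, approximate
    "matrix": 75,    # server + DB + storage, approximate
}

lemmy_total = sum(lemmy.values())
print(f"Lemmy alone:  ${lemmy_total}/month + usage")                 # $483/month + usage
print(f"Grand total: ~${lemmy_total + sum(other.values())}/month")   # ~$639/month
```

The line items come to $639/month, which rounds to the ~$640 quoted above.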

The wiki, Mastodon, Matrix, & log servers all remained the same. The changes are for Lemmy only and will be the focus going forward.

First attempt

As you can see, it was quite large. I've decided to scale way down. I attempted this on 7/12; however, I ran into issues with configuration and database migration, so that plan was abandoned. This is what it looked like:

Planned infrastructure

  • Single instance server (1 Node @ $63/month total)
    • Includes front-end, backend, & pictures server.
  • Database server (1 Node @ $60/month total)
  • Object Storage ($5/month + Usage)
  • Extra Volumes ($20 / month total)

[Total new cost: ~$150 + Usage]

Second attempt

I discovered that the issues from the first attempt were caused by Lemmy's integration with Postgres, so I decided to make a second attempt. This is the current state:

Current infrastructure

  • Single instance server (1 Node @ $63/month total)
    • Includes front-end, backend, & pictures server.
  • Database server (1 Node @ $60/month total)
  • Object Storage ($5/month + Usage)
  • Extra Volumes ($20 / month total)
  • wiki.discuss.online web node ($7/month)
  • wiki.discuss.online database node ($15/month)

[Total new cost for Lemmy alone: ~$170 + Usage]

New total monthly server cost out of pocket: ~$330
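Adding up the current line items confirms the rounded figures quoted above (prices copied from the lists; the dictionary keys are just labels):

```python
# Current Lemmy infrastructure (USD/month) from the list above.
lemmy = {
    "single_instance_server": 63,  # front-end, backend, & pictures
    "database_server": 60,
    "object_storage": 5,           # + usage
    "extra_volumes": 20,
    "wiki_web": 7,
    "wiki_db": 15,
}

# Unchanged services: log management ($21), Mastodon (~$60), Matrix (~$75).
unchanged = 21 + 60 + 75

lemmy_total = sum(lemmy.values())
print(f"Lemmy alone:  ${lemmy_total}/month + usage")      # $170/month + usage
print(f"Grand total: ~${lemmy_total + unchanged}/month")  # ~$326/month, vs ~$640 before
```

That's roughly half the previous bill.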

My actual bill this month is already slightly above that, at $336, since it still includes some charges from the previous infrastructure.

Going forward

Going forward, I plan to monitor performance and try to balance the benefits of a snappy instance against the cost it takes to get there. I am fully invested in growing this community. I plan to continue contributing financially and have zero expectation that everything will be covered; however, community interest is very important, and I'm not going to overspend for a very small set of users.

If the growth of the instance continues or rapidly changes I'll start to scale back up.

I'm learning how to run a Lemmy server. I'll adjust to keep it going.

Here are my current priorities for this instance:

  1. Security
    • This has to be number one for every instance. Where you store your data is your choice, and you must be able to trust that your data is safe and that bad actors cannot get to it.
  2. Resilience & backups
    • Like before, it's your data, and I'm keeping it usable for you. I plan to keep it that way by providing disaster recovery steps and tools.
  3. Performance
    • Performance is important to me mostly because it helps ensure trust. A site that responds well means the admin cares.
  4. Features
    • Lemmy is still very new and needs a lot of help. I plan to contribute to the core of Lemmy along with creating 3rd-party tools to help grow the community. I've already begun working on https://socialcare.dev. I hope to supplement some missing core features with this tool and allow others to gain from it in the process.
  5. User engagement
    • User engagement would be #1; however, everything before this is what makes user engagement possible. People must be using this site for it to matter and for me to justify cost and time.
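The retention side of priority 2 can be sketched in a few lines. This is a minimal illustration assuming dated backup filenames (the `lemmy-YYYY-MM-DD.dump` naming scheme is hypothetical, not the instance's actual tooling), reusing the same 14-day window the log server applies:

```python
from datetime import date, timedelta

RETENTION_DAYS = 14  # same window the log server uses

def expired_backups(filenames, today):
    """Return backup files older than the retention window.

    Assumes hypothetical names like 'lemmy-2023-07-12.dump'.
    """
    cutoff = today - timedelta(days=RETENTION_DAYS)
    expired = []
    for name in filenames:
        day = date.fromisoformat(name.removeprefix("lemmy-").removesuffix(".dump"))
        if day < cutoff:
            expired.append(name)
    return expired

files = ["lemmy-2023-07-01.dump", "lemmy-2023-07-12.dump"]
print(expired_backups(files, today=date(2023, 7, 20)))  # ['lemmy-2023-07-01.dump']
```

A real setup would pair this with scheduled database dumps and off-box storage, but the pruning logic is the part that keeps storage costs bounded.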

Conclusion

If you notice a huge drop in performance or more issues than normal please let me know ASAP. I'd rather spend a bit more for a better experience.

Thanks, Jason

35 comments
  • Oof ye that was incredibly optimistic. Even the lower infra is more expensive than what I pay for lemmy.dbzer0.com.

    To get this kind of traffic one needs to take a similar approach to lemmy.world, with likewise promotion from the larger Lemmy community, but even that has slowed down. If there were already large Reddit communities that they managed, it could help get more people to their instance, but without that, it's much better to scale upwards as demand appears.

    • A few subreddits were planning to come over at the end of the month, but that didn't work out. Their members revolted and threatened to replace the mods, so they stayed over there. Combined, those subreddits had over 200k members, so it would have been a big influx even if not everyone came over. I thought I was under-planning at the time.

      I was reaching out to Reddit mods, trying to convince them to join my instance. It almost worked, haha.

      But in the end, I had to scale down while still keeping something snappy. The DB is already over 15 GB, and I want to use a managed DB; it's too large to put on smaller instances.

      • Ye, that's why I said you need to be the owner of the sub. The piracy move worked because I was the top mod and had been trusted for a while. It didn't work in stable diffusion because they didn't take it as seriously as I did and I couldn't act the same way. Eventually the flood gates will open, but when exactly that will be, nobody knows.

  • That was very optimistic, but yeah I am probably also going to down-scale a bit for the time being. Together with the performance improvements in Lemmy it shouldn't make much of a difference though and I am redesigning my setup a bit to allow quicker scale up.

    • I was expecting over 200k people to come over from Reddit overnight. There were a few communities actively working to migrate. In the end, the followers revolted against a Rexxit, and they didn't come.

      • Not overly surprised to be honest, but I think one crucial mistake was to paint it as a moderator strike, a framing that Reddit management actively propagated once they realized it backfires.

        I have noticed an anti-mod sentiment from users coming over from Reddit before, and when asked why, you quickly realize that they have no idea what mods actually do and are just annoyed by some overly active moderation bot or some personal pet peeve being moderated as spam. A typical case of a thankless job that people only notice when you stop doing it.

        I think it was mostly a small minority of moderators and 3rd-party app power users that left, and while this will not have an immediate effect on Reddit, it will probably initiate a slow death spiral of worse and worse subreddit content.

      • @jgrim @poVoq

        Yeah, even though twexxit happened, there is still an absolutely massive number of holdouts, so I expect rexxit to be just as slow.

    • What does yours look like now?

      • I am currently on 8 cores and 32 GB RAM, but that is < 20% utilized. I am planning to move over to a 4-core, 8 GB machine, but optimize that as a dedicated database machine in case scale-up is needed again.
