• BellyPurpledGerbil@sh.itjust.works · 7 months ago

    I don’t disagree that there are speed tradeoffs, like in-process function calls vs network requests. But eventually your whole monolith gets so fuckin damn big that everything else slows down.
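    To put rough numbers on that tradeoff: an in-process call is nanoseconds, a service-to-service hop is hundreds of microseconds at best. A minimal sketch in Python, where the network figure is an assumed typical same-datacenter round trip, not a measurement of any real service:

    ```python
    # In-process call vs network hop. The 0.5 ms round trip is an
    # assumed, typical same-datacenter figure, not a measurement of
    # any real service.
    import time

    ASSUMED_NETWORK_RTT_S = 0.0005  # hypothetical 0.5 ms service hop

    def local_handler(x: int) -> int:
        return x + 1

    N = 1_000_000
    start = time.perf_counter()
    for i in range(N):
        local_handler(i)
    per_call = (time.perf_counter() - start) / N

    print(f"in-process call: ~{per_call * 1e9:,.0f} ns each")
    print(f"assumed network hop: ~{ASSUMED_NETWORK_RTT_S * 1e9:,.0f} ns each "
          f"(~{ASSUMED_NETWORK_RTT_S / per_call:,.0f}x slower)")
    ```

    Thousands of times slower per call, and microservices eat that on every hop. That part of the argument is real.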

    The whole stack sits in a huge expensive VM, attached to maybe 3 or 4 large database instances, and dev changes take forever to merge in or back out.

    Every time a dev wants to test their build locally, they type a command and wait 15-30 minutes. Then troubleshoot any conflicts. Then run over 1000 unit tests. Then check that they didn’t break coverage requirements. Then make a PR, which triggers the whole damn process all over again, except now it has to redownload the docker images, reinstall dependencies, rerun the 1000+ unit tests, run 1000+ integration tests, and rebuild the frontend before the end-to-end UI tests can even start. Pray nothing breaks, merge to main, then do it ALL OVER AGAIN FOR THE STAGING ENVIRONMENT, where QA has to plan for and execute hundreds of manual tests, and we’re not even at prod yet. The whole way you’re begging for approvals from whoever is impacted, whether the change is one line or thousands.
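    Back-of-the-envelope, with every stage duration below being a made-up but plausible number (only the 15-30 minute build range comes from what I said above), a one-line change burns hours even when nothing fails:

    ```python
    # Back-of-the-envelope cost of shipping a one-line change.
    # Every duration is a hypothetical ballpark; only the 15-30 min
    # build range comes from the comment above (mid-range used here).
    stage_minutes = {
        "local build": 20,
        "local unit tests (1000+)": 15,
        "CI: redownload docker images + deps": 10,
        "CI: unit tests (1000+)": 15,
        "CI: integration tests (1000+)": 30,
        "CI: frontend build": 10,
        "CI: end-to-end UI tests": 30,
    }

    pr_cycle = sum(stage_minutes.values())
    # Staging reruns the CI stages but not the local ones.
    staging_cycle = pr_cycle - (
        stage_minutes["local build"] + stage_minutes["local unit tests (1000+)"]
    )
    total = pr_cycle + staging_cycle

    print(f"one PR cycle: {pr_cycle} min (~{pr_cycle / 60:.1f} h)")
    print(f"staging rerun: {staging_cycle} min")
    print(f"total, zero failures, before manual QA: "
          f"{total} min (~{total / 60:.1f} h)")
    ```

    With those assumed numbers that’s nearly four hours for a perfect run. Triple it for one flaky test or merge conflict and “hours to days” stops sounding like hyperbole.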

    When this process gets so large that any change, no matter how small, takes hours to days, you’re fucked. Because unfucking it once it’s that big is such a monstrous effort that it’s equivalent to rebuilding the whole thing from scratch.

    I’ve done this song and dance so many times. If you want your shit to be speedy per request, great, just expect literally everything else to drag. When companies were still releasing software like once a quarter, this made sense. It doesn’t anymore.

    • 1984@lemmy.today · 6 months ago

      I agree with you, and that is a hellish environment to work in.

      There must be a better middle ground for all of this.