I returned to AWS, and was reminded why I left

(fourlightyears.blogspot.com)

171 points | by andrewstuart 1 day ago

30 comments

  • aljgz 51 minutes ago
    Years ago, I joined a company, took over a dev team and was asked to launch the product in 3 months.

    They were using AWS, so I logged in to the account to add a few more machines. Right there, in front of my eyes, were the signs of an adversarial, abusive relationship.

    The UI to fire up a new machine did not show me the price. I had to look up the price in another table that did not have the specs.

    I had to keep the two tables open and cross-check the specs and price.
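    That manual cross-check is a trivial join that the console could do itself. As a toy illustration (the instance types and rates here are made up for the example, not actual AWS prices):

```python
# Hypothetical data: one table with specs but no prices,
# another with prices but no specs.
specs = {
    "t3.medium": {"vcpu": 2, "ram_gb": 4},
    "m5.large": {"vcpu": 2, "ram_gb": 8},
}
hourly_price = {
    "t3.medium": 0.0416,
    "m5.large": 0.096,
}

# The join a UI could do for you: one table with both.
merged = {
    name: {**spec, "usd_per_hour": hourly_price[name]}
    for name, spec in specs.items()
    if name in hourly_price
}

for name, row in merged.items():
    print(f"{name}: {row['vcpu']} vCPU, {row['ram_gb']} GB RAM, "
          f"${row['usd_per_hour']}/hr")
```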

    If I had learned one thing from my past life, it was that if you see the signs of an abusive relationship, and you have the option to walk out but don't, all that follows is your own fault.

    Created a DigitalOcean account and moved everything over. Set up our CI/CD to deploy there, and spent the next two months on the product, launching one month earlier than promised.

    Some years before that, I saw a video online where a person digs a hole near a river and lays a pipe connecting the river to the hole. The fish push themselves hard through the pipe into their trap. Choosing the path of least resistance, and never backing off from a mistake: recipes for ending up like those fish. The video left a big impression on me.

    • Lucasoato 24 minutes ago
      > They were using AWS, so I logged in the account to add a few more machines. Right there, in front of my eyes, were the signs of an adversarial, abusive relationship.

      > The UI to fire up a new machine did not show me the price. I had to look up the price in another table that did not have the specs.

      I don’t want to be the one defending AWS, but I don’t think this is a valid reason not to like them. I mean, pricing depends on so many factors: reserved, dedicated, spot, and on-demand instances all have different prices.

      I don’t even think that using the UI to spin up the machine is the right way to do that in an enterprise setting; you should always do it through Infrastructure as Code, so you know exactly what you have up and running just by looking at it, as you would with any program. I’d suggest using the UI for simple testing, for which the costs are often (but not always) negligible.

      Jeff Bezos if you see this please send me some cash.
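      For what it's worth, the IaC point above can be sketched in a few lines. The instance names, AMI ID, and use of raw EC2 parameters here are hypothetical stand-ins; real setups would more likely use Terraform, CloudFormation, or the CDK:

```python
# Declare the desired fleet as data that can be code-reviewed and
# diffed, instead of clicked together in a console. All values here
# are made up for illustration.
DESIRED_INSTANCES = [
    {"name": "web-1", "type": "t3.medium", "ami": "ami-12345678"},
    {"name": "web-2", "type": "t3.medium", "ami": "ami-12345678"},
]

def to_run_instances_params(inst):
    """Translate one declared instance into EC2 RunInstances kwargs."""
    return {
        "ImageId": inst["ami"],
        "InstanceType": inst["type"],
        "MinCount": 1,
        "MaxCount": 1,
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": inst["name"]}],
        }],
    }

# In a real deployment each params dict would be passed to
# boto3.client("ec2").run_instances(**params); here we only build
# them, which is exactly what makes the desired state inspectable.
params = [to_run_instances_params(i) for i in DESIRED_INSTANCES]
```

      The point is that the whole fleet is visible in one reviewable file, rather than scattered across console screens.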

      • whateverboat 8 minutes ago
        I must disagree heavily with you here. Prices can depend on many factors, but when that particular account is choosing that particular machine, AWS knows what it will cost, and they can show it dynamically. It's hard to believe, in this day and age, that you cannot have a dynamic price chart right beside the machine selector, showing or calculating prices in real time for that particular product.

        About using IaC to set up the infrastructure, sure, but sometimes you just need to browse stuff to get a feel before actually writing code.

      • bulletsvshumans 18 minutes ago
        They absolutely could calculate and put the price in the UI if they wished to. Other cloud vendors do.
      • finaard 17 minutes ago
        I've been in a similar situation - a surprising number of companies really do just click to create instances. The last time I encountered that at a customer, I improved things a bit by creating templates and scripting instance creation based on those templates - but ideally we'd have had the templates themselves, as well as the network side, generated by Ansible.

        But that's the problem: the complexity of doing that properly is pretty much the same as just running your own hardware (which is what I'm working with most of the time - handling stuff on physical servers). And at that point the question should be why you're paying AWS so much money, and paying your people to automate AWS workflows, when you could just pay them to automate workflows on physical hardware, which would be far cheaper to run than the AWS instances.

      • lr1970 8 minutes ago
        > pricing depends on so many factors like reserved/dedicated/spot/on-demand instances have all different prices.

        Or you can have your own negotiated private pricing which is a whole different story in itself.

      • richwater 17 minutes ago
        The faster people realize AWS hates the need for a UI, the better.

        It should really be a read-only layer for metadata and logs.

    • chuckadams 20 minutes ago
      AWS actually has a pretty good price calculator with some decent presets (but FFS, can I have an "uncheck all" button?) but of course it's an entirely separate app. Amazon naturally wants some friction to having this pricing information handy, though I suspect the main reason has to do with Conway's Law: AWS still ships their org chart.
    • zsoltkacsandi 17 minutes ago
      I agree with you to some degree, but I would like to point out that AWS pricing is much more complicated than that; you can't calculate how much you will pay from a static number showing up on the UI.

      If it bothers you that you need to open two tabs for cross-checking the costs, you may want to avoid every cloud provider, not just AWS.

      Once you have NAT gateways, CloudFront, S3, auto scaling, load balancers, etc., calculating the cost becomes an art rather than an exact science. And if you don't use these, there is no point in using AWS; there are plenty of "cheap" VPS providers.

      • PLenz 13 minutes ago
        If they can charge me for it then they can calculate it and show it to me. Anything else is obfuscation.
        • zsoltkacsandi 10 minutes ago
          They don't know in advance how much bandwidth you will use, how much traffic you will have, which auto-scaling rules will trigger, etc. It's not obfuscation, it's billing based on your usage. And as with everything in life, there are tradeoffs.
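          A toy estimator makes the point concrete (all rates here are invented for illustration, not actual AWS pricing): the same instance produces very different bills depending on usage that is unknown at provision time.

```python
# Monthly bill as a function of usage, with made-up rates.
def monthly_estimate(hours_on, instance_per_hour,
                     egress_gb, egress_per_gb,
                     nat_hours, nat_per_hour):
    return (hours_on * instance_per_hour
            + egress_gb * egress_per_gb
            + nat_hours * nat_per_hour)

# Same instance, same config -- only the egress differs.
quiet_month = monthly_estimate(730, 0.04, 50, 0.09, 730, 0.045)
busy_month = monthly_estimate(730, 0.04, 5000, 0.09, 730, 0.045)
print(f"quiet: ${quiet_month:.2f}, busy: ${busy_month:.2f}")
```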
  • tedivm 1 hour ago
    > AWS stomped on open source projects - despite the clear desire of projects like Elasticsearch, Redis, and MongoDB not to be cloned and monetized, AWS pushed ahead with OpenSearch, Valkey, and DocumentDB anyway, capturing the hosted-service money after those communities and companies had built the markets; the result was a wave of defensive licenses like SSPL, Elastic License, RSAL, and other source-available models designed less to stop ordinary users than to stop AWS from stripping open-source infrastructure for parts, owning the customer relationship.

    This is completely backwards, at least with OpenSearch and Valkey. AWS didn't create the forks until after the upstream projects changed their licenses, so it's really weird to say that the forks "resulted" in the license changes when those forks were a response to the license changes. With Valkey in particular, it was members of the former Redis core development team that created Valkey.

    • hankerapp 1 hour ago
      A lot of these projects work on a business model where they open-source their core product, and provide advanced services, installation, maintenance or fully-managed services around their product. AWS was bypassing them by providing fully-managed services. On this, I am on the side of the people behind the projects. Basically AWS was eating their lunch. They had no choice but to change the licenses.
      • skywhopper 22 minutes ago
        Just because they picked a bad business model doesn’t mean they deserve to avoid competition. Don’t give away your source code if you don’t want someone else to provide hosting.
      • rpdillon 49 minutes ago
        They have a problem with their business model, then. License changes to a formerly open source project are costly. The community reacts very strongly when license terms change after they've come to depend on a product, and they should.

        Why do we apply this standard to MongoDB but not to Apache, Linux, Postgres, or MariaDB? One purpose of an open source license is to allow many providers to provide the service. As I've talked about here previously, Elasticsearch wasn't able to provide the service I needed, so I had to move to AWS.

        It's weird to me that the Hacker News community doesn't think that sort of competition is good. The narrative seems to be that all these businesses are somehow victims of AWS, when it seems the truth is much more straightforward: they provided open source software and people used it. The fact that their business had no working plan to actually monetize that foundation should not be taken out on the community.

        • ipaddr 39 minutes ago
          Competition would mean Amazon creating their own software. Taking software others made and using your monopoly ecosystem and scale to drive the original creator out of the game kills the product.

          Many support breaking up Amazon so others can compete - not killing small entities while Amazon grows.

          • skinfaxi 9 minutes ago
            > Taking software others made and using your monopoly eco-system and scale to drive the original creator out of the game kills the product

            They took software that others gave away for free without restriction and did what they wanted with it. It took time but the community figured out this exploit path and patched it in subsequent license versions.

          • rpdillon 34 minutes ago
            It's not just Amazon, it's also smaller providers like Dreamhost, which I've been using for 20 years. I feel like people are in favor of killing the hosting ecosystem so that we can support businesses that didn't have a working plan to monetize their open source offering.
        • cyanydeez 37 minutes ago
          Walmart pulling up to a small town, opening a single business, and paying everyone minimum wage is not 'competition is good'.

          Just try a little bit of understanding.

          • rpdillon 16 minutes ago
            This feels close to "felony contempt of business model".

            https://www.eff.org/deeplinks/2019/06/felony-contempt-busine...

            We are supportive of 3rd party ink cartridges, and there's little concern for the business model of the printer manufacturers. We instead care about the rights of the folks using the printers.

            With Postgres, no one bats an eye that there are thousands of hosting companies providing Postgres as an offering, and they give nothing back to the project. Same with Apache, Nextcloud, Linux, Nginx, Sqlite, and thousands of other pieces of open-source software. Are folks against hosting companies like https://yunohost.org/?

            It's only when (1) the software is open-source, and (2) the entity behind it doesn't know how to sustain itself with open-source, that we suddenly change positions and view the project as a victim. This doesn't happen with printers, it doesn't happen with other open source software. I'm not even against a change in the license, but claiming that AWS is evil for doing this doesn't track.

          • tonyedgecombe 20 minutes ago
            Maybe it is for the consumer. When Aldi opened in my nearest town my food bill dropped by 20%.
          • surajrmal 18 minutes ago
            Arguably the town is at fault, in that analogy, for choosing to permit Walmart to open there. If you want to control the negative externalities of capitalism, you can't just decline to regulate and hope things will work out.

            Even if it weren't AWS, someone else with enough determination could use the same open source code to create a compelling alternative taking away business from the original authors. Trying to use social norms to make people not do that is not effective. You need mechanisms that can be enforced via legal procedures to be effective.

    • ceejayoz 1 hour ago
      > it's really weird to say that the forks "resulted" in the license changes when those forks were a response to the license changes

      But those license changes were a response to how AWS was monetizing their work in ways unsustainable for the upstream projects.

      • embedding-shape 35 minutes ago
        > But those license changes were a response to how AWS was monetizing their work in ways unsustainable for the upstream projects

        Or, seen from the other side, these projects chose initial licenses that didn't fit how they wanted others to use their projects.

        If you use a license that gives people the freedom to host your project as a service and make money that way, without paying you, and your goal was to make money that specific way, it kind of feels like you chose the wrong license here.

        What was unsustainable (considering this perspective) was less that outside actors did what they were allowed to do, and more that they chose a license that was incompatible with their actual goals.

        • ceejayoz 32 minutes ago
          The situation changed. A license that's the right choice at one point may not be the right license a decade later.
          • ncruces 15 minutes ago
            That's fair, but forking the FOSS version is also an adequate response.
          • embedding-shape 31 minutes ago
            Agree, as long as existing contributors agree the license should be changed, projects should feel free to do so, no harm, no foul.
        • tonyedgecombe 18 minutes ago
          I’m not sure any open source license is going to help when you can ask Claude to clone an application in the language of your choice.
      • jgalt212 1 hour ago
        Yes, this was my impression as well.
    • 2ndorderthought 30 minutes ago
      Sometimes I wonder how much it would hurt Amazon to pay the creators and maintainers of the OSS they sell 1 cent per billing period of use (1 hr?). I also wonder how much money that would offer an OSS team to contribute, risk free, to improving the product.
      • richwater 15 minutes ago
        I think you would be surprised how many commits in OSS come from paid workers of the various cloud and tech companies out there.
    • stavros 1 hour ago
      Of course AWS didn't create the forks until the projects changed their license to disallow AWS from making money from their code! That's the whole point here.
      • jasonlotito 12 minutes ago
        When they changed their license, they were no longer open source. They could have chosen an open source license such as the AGPL, but they did not. They were a non-open-source company at that point, and AWS was putting out a product built on open source. Simple as that.

        Redis was not an open source company when AWS moved to Valkey.

        Companies are free to license under the AGPL if they want. Or other open source licenses.

        Sorry, but non-open source companies aren't getting sympathy from me because they are hating on open source projects.

        • stavros 10 minutes ago
          These were open source projects that had to change licenses away from open source because of AWS. I'm not sure how the OSS companies are the bad guy here.
  • djyde 1 hour ago
    I've transitioned between cloud services and self-hosting a few times:

    1. Vercel Phase: My first project used Vercel. Since my project was Next.js, the experience was decent. But as my project gained some users, I found that even for projects under 100 users, I needed to pay $20 per month. Since my service didn't require high performance, this cost felt steep.

    2. Self-host Phase (Hetzner + Coolify): Later, I started setting up my own server with Hetzner and deploying with Coolify. Since Coolify is open-source and free, I only had to cover the cost of a VPS (even $5 a month was sufficient). I could deploy PostgreSQL instances and run a web server on it. But later I discovered that even this way, I still had to spend a lot of effort maintaining PostgreSQL and Redis. Even though they were containerized with Docker, managing them was still troublesome. I needed to pass various system and environment variables between services, which was very tedious.

    3. Cloudflare Phase: So later I switched to Cloudflare. With Cloudflare Workers, I can deploy fullstack applications and use D1 Database and Cloudflare KV to replace Redis. These features can be called directly within the Worker without needing to pass environment variables.

    Plus, the local development experience is excellent and the pricing is very reasonable, so I've been using Cloudflare's entire suite ever since.

  • sudosteph 6 minutes ago
    I'm surprised by the author's hate towards DynamoDB. It's probably one of my favorite AWS Services. Great availability and no operational overhead. Cost was pretty minimal too each time I've used it, but you do need to spend some time architecting your data model up front, and that requires reading service docs and understanding it.
  • djinn 27 minutes ago
    AWS has been systematically hollowed out of technical staff since 2023, either through mass layoffs or via two cycles of performance improvement plans. Often I find the most skilled peers in presales or support are no longer with AWS, whilst the ones with the most ambiguous work histories have been retained and promoted.

    Use AWS at your own risk, Paul Vixie is not there to save you.

  • jfengel 1 hour ago
    I don't work in that area, so I only touch AWS once in a while for personal fun projects.

    And every time it's a nightmare. I'm just banging out a server for my experimental card game, not setting up a new financial institution. Everything looks as if I'm preparing to scale to infinity tomorrow, with a staff of a thousand and a budget backed by VCs.

    Fortunately there's Netlify and similar, who put a gloss on it so that I don't have to boil the ocean. I figure that one of these days I might actually be forced to learn IAM and VPNs and God only knows what else. Meantime, every time I touch it my eyes bug out.

    • chuckadams 1 hour ago
      You can just spin up a raw VPS on EC2 or Lightsail, give it a public IP, and call it a day. You aren't required to implement every enterprise pattern in the book.
      • embedding-shape 33 minutes ago
        If there is any single service I'd avoid on AWS, it's Lightsail: it'll cost you a lot more than almost anything out there, it's slow as molasses (even tiny services can take tens of minutes to deploy), and you'll experience random failures that not even AWS reps can explain. Avoid at all costs.

        It's a ghost of its former self, but I'd probably still rather use Heroku today than being forced to use Lightsail even once again.

        • chuckadams 31 minutes ago
          I sure prefer plain EC2 to Lightsail as well, and prefer Hetzner over either, but looking at these replies ... can someone tell me where the goal posts are right now?
      • themgt 52 minutes ago
        Congrats, your raw EC2-hosted 500MB WebGL experimental card game went to the HN Front Page! You now owe AWS $30k in egress costs.
      • DaanDL 1 hour ago
        But that's costly. Speaking of my own experience: going from a webapp fully hosted on an EC2 instance to a railway and vercel setup reduced my costs 10x.
        • liveoneggs 1 hour ago
          t4g.nano is $3/mo; a similarly specced Fargate task on ECS (just any Docker container) is $10/mo
        • chuckadams 1 hour ago
          Maybe so, but it's still not the complexity nightmare that some would have us believe it is.
    • benoau 1 hour ago
      What amazes me is how Heroku absolutely nailed what most web apps need nearly 20 years ago.
      • ChrisBland 1 hour ago
        I miss heroku dearly. somewhere at Salesforce there is an exec who killed the product and shifted it to enterprise and is now looking at the vibe coding revolution seeing their opportunity missed.
        • christophilus 1 hour ago
          Render has been an excellent replacement, in my experience.
        • iamflimflam1 1 hour ago
          I suspect the people responsible have fully justified to themselves any decisions they made, helped along with any bonuses they got for doing it.
        • the__alchemist 1 hour ago
          Why? It is still up, and working just as it used to.
        • cpursley 1 hour ago
          Fly and Render are what Heroku would be if it hadn’t stopped innovating. And Neon for Postgres.
          • trashburger 48 minutes ago
            > And neon db for Postgres.

            For 90% of the time when they're up.

        • maccard 1 hour ago
          Digital ocean is the answer. You give it a container and off you go.
          • ipaddr 36 minutes ago
            Used to be; now they require 2FA for add-on domains over a certain amount
            • ceejayoz 29 minutes ago
              Of all the things to be upset about, mandatory 2FA doesn't seem like one.
            • maccard 28 minutes ago
              It’s negligent to not use 2FA for any cloud platform where credentials can be used to spin up resources.
    • KptMarchewa 52 minutes ago
      it's only a nightmare if you haven't had to deal with Azure
    • djyde 1 hour ago
      I switched to Cloudflare and it's been a breath of fresh air - everything I need and the pricing is reasonable.
    • MagicMoonlight 34 minutes ago
      AWS is aimed at enterprise, not personal projects. Personal projects wouldn’t give them any meaningful revenue because the only thing that matters is cost.
  • rglover 14 minutes ago
    You can accomplish a lot by just having a basic knowledge of Linux sysadmin. I was clueless and then learned some systemd-and-curl-fu. Will never forget the "holy sh*t, this is deceptively simple" moment. A bit more research and I found that beyond convenience and specialty APIs, you really just don't need a lot of this stuff to run a healthy system (since reducing absolute cloud dependence, my reliability has gone through the roof).
    • sandruso 4 minutes ago
      100%. I'm not really sure why we all agreed that deployment is somehow the hardest thing, which you need to outsource, when setting up a Linux server is one of the richest experiences you can get, and it will pay dividends forever.
  • rembal 59 minutes ago
    +1 on the IAM over-engineering, though to AWS's credit, I suspect it evolved rather than being designed, and that's what you get when evolution has to maintain some level of backward compatibility (think humans still having to be able to lay eggs). Another thing that happens occasionally to SaaS companies is AWS creating a copy of their product in a bit of a sus way - but that's not a technical problem, it's a business model problem.
  • xmcp123 35 minutes ago
    Something that has always bothered me an outsized amount is Elasticache.

    I will bite the bullet and pay for RDS because it adds a lot of value - scalability, a reasonably optimized config, backups I don’t have to worry about.

    But Elasticache is exploitatively priced with almost no value add.

    It is slower, less optimized, less stable, and only supports one DB compared to a vanilla redis install with zero configuration.

    There are some scalability improvements, but it’s extremely rare they’re even required because vanilla redis so wildly outperforms elasticache on a similar instance.

  • finaard 22 minutes ago
    > My business email system still does not work.

    This is always the weird thing in those rants. He's complaining that after 4 days his mail is offline.

    Now I'm doing a mix of physical servers in rented rackspace, and rented servers - but even there I can have billing mixups where they deactivate servers for no good reason. And to get email working again the limiting factor would be the DNS TTL - new servers would be online somewhere else within hours of it going down. (And yes, I tested that just last year - one hoster threatened cutoff due to non-payment on a paid invoice, which prompted me to move the mail server just in case while getting this resolved).

    • somewhatgoated 19 minutes ago
      I don’t get your point, what is the weird thing?

      That he is complaining about his email being down or that he trusted AWS at all with email?

      • rkent 11 minutes ago
        The only way email is down for days for a competent sysadmin is if their DNS is also with AWS, so I assumed that was the case. Assuming that's true, what's weird to me is that, after deciding he hated AWS and leaving it, he still kept his business DNS (the most important service there is) with AWS.
    • panny 16 minutes ago
      >new servers would be online somewhere else within hours of it going down

      Yeah, no, that's not how it works with email. You have to build reputation for weeks or receivers will throttle you.

  • h1fra 20 minutes ago
    To this day I still don't understand why people love AWS. It's overly complex, full of dark patterns, and not even that good compared to alternatives.
  • dzonga 40 minutes ago
    the A.I (LLM) merchants will tell you that AI is now writing software (agentic coding, they call it) - yet they can't bill you properly, or have a broken billing mechanism.

    their dashboards are trash & don't work - Google Cloud, AWS Console, Google Ads, Meta Ad manager

    I won't even mention the hyped up LLM vendors.

    but here we are - people being laid off due to A.I, money being funneled into gigawatt datacenters

    • mcherm 26 minutes ago
      I don't think that's the real issue. The problems with billing and dashboards at cloud vendors are not new within the past few years, they have existed far longer than the LLM coding.
    • owebmaster 11 minutes ago
      The billing "problems" these companies have are working fine for them as they are there to increase revenue, not to improve user experience.
  • eluded7 20 minutes ago
    I'd tend to agree with the author. If forced to choose a cloud platform though (and that often is the case) then AWS is probably the best of the bunch in terms of reliability. Have heard and experienced some real horror stories with Azure & GCP by comparison.
  • morpheuskafka 1 hour ago
    > I am reminded why I left AWS and how I need to finish the job, get off AWS Workmail, move my domains from Route53 and never return.

    Well, besides the fact that the author's account got suspended for no reason, WorkMail is being shut down in March 2027 anyway. I recommend checking out Purelymail for a budget, batteries-included option. Another option is to run your own server but have it use something like AWS SES to send externally, avoiding the IP reputation issue.

  • alde 27 minutes ago
    The set of core services on AWS remains amazing: EC2, S3, IAM, EKS, Route53, RDS etc.

    AWS IAM is extremely well designed when you compare it with the spaghetti monster IAM systems of other clouds.

    Every time I try the new cool thing supposed to replace these services on some other provider - I understand how mature and polished the AWS ones are.

    With that said, the other 90% of AWS services, like WorkMail, Cognito, and API Gateway, are absolute hot garbage which no well-meaning AWS expert will touch with a 10 meter stick.

  • thegrim33 18 minutes ago
    Here's a fun game to play. Every time you see a negative story on the front page about some US company or technology (be it Amazon, Google, OpenAI, you name it) go look at the submitter's info and see if they're European (or in this case, Australian, same difference). You'll find that in 99.99% of cases they are. Isn't that funny? Isn't that an interesting coincidence? How might you explain that?
    • owebmaster 17 minutes ago
      What's your theory?

      US Americans don't like to complain? They are moderated? They prefer to pay an extra to help the cause?

  • geoffbp 1 hour ago
    Slightly different but related topic - for people who work with people vibe coding, what is the easiest way to allow that for non tech users (and reducing risk)? AWS or something like vercel? Coolify?
    • sudosteph 11 minutes ago
      I'm old and bitter about this, but you're not reducing risk by going with PaaS, you're just outsourcing it. That recent "My AI Agent deleted my prod DB" story was only possible because the PaaS they were using allowed for 1-click permanent delete. At least AWS has a "prevent accidental termination" checkbox.

      Nobody wants to hear this, but as things stand, there's no escaping risk for vibe coders right now. Personally, I think AWS is still a good choice for the long run, but don't make the mistake of thinking current LLMs can manage the environment on par with a decent infra engineer. That's one of their weaker areas right now. The good news is there are a million managed service providers and AWS-competent humans still in existence. Also, Premium Support is a good resource.

      Whatever you do, make a lot of backups and store them on a different service somewhere. Then if you get to a situation where you need to do something with sensitive data, or need to raise money, engage with someone who can do a proper review.

    • _puk 31 minutes ago
      Vercel and supabase seems to be the norm around here.

      DX is simple, integrations between the two, and the stack is well understood by the LLM.

      Lovable uses supabase, and is surprisingly easy to eject from too; I've done the lovable to Vercel + supabase a couple of times, even managing to keep it syncing via the Git integration. You can get proper scalable infra and minimal vendor lock in whilst the vibe coder gets to play with the pretty.

  • sbinnee 23 minutes ago
    I also tried. The only service I use is S3, for personal backup. I pay around 15 cents per month.
  • dangoodmanUT 35 minutes ago
    GCP would be perfect if they didn't have a history of randomly dropping quotas on startups, causing them downtime
    • squirrellous 19 minutes ago
      What do you find appealing about GCP? I occasionally hear positive sentiment like this but don’t entirely understand the reason, mostly because I haven’t used non-GCP clouds professionally. Is it just the least bad of all the big clouds?
  • joefourier 1 hour ago
    > Cloud computing was an absolutely mind blowing revolution - suddenly your startup could run its own computer systems in minutes without need to install and run your own systems in a data center. This was an absolute game changer, and I really drank the AWS Kool Aid down to every last drop then I licked out the cup. I was all in on AWS in a big way.

    Am I the only one who remembers that VPSes and dedicated hosting services were a thing before AWS came around? Yes you had to pay for a month at a time and scaling wasn’t as instant, but it wasn’t like the only option before cloud computing was having to drive to the datacentre and install your own server.

    • tiffanyh 1 hour ago
      > suddenly your startup could run its own computer systems in minutes without need to install and run your own systems in a data center.

      The “in minutes” is doing a lot of the work in that sentence above.

      I also used dedicated servers in the late ’90s (and they still offer great value today). But before AWS, provisioning new hardware typically took days, not minutes.

      AWS changed that, and the rest of the industry eventually followed.

      • reliablereason 28 minutes ago
        No, you could rent virtualised servers way before AWS. AWS simply had good marketing.

        The virtualised server thing was not an AWS thing; the new thing was their other services. For example, instead of renting a virtual server and installing a database on it, you could rent the database; that was sort of a new thing that AWS made into a thing.

        It was never cheaper; what you paid for was a promise of fire and forget. You would no longer need to worry about any responsibility to update the server or the database, because the AWS crew took care of that.

      • joefourier 49 minutes ago
        > I also used dedicated servers in the late ’90s (and they still offer great value today). But before AWS, provisioning new hardware typically took days, not minutes.

        VPSes and non-custom configs for dedicated servers were pretty instant as far as I know. I think the advantage of AWS was more that you could scale up and down much more easily, since you weren't locked into a monthly contract, and that you could automate server provisioning through an API.

    • _puk 21 minutes ago
      If you recall, AWS didn't scale instantly originally either.

      We had super bursty traffic, and had to go with Google Cloud (very early days! [0]) because with AWS you'd need to get in touch with them and pre-warm ELB capacity for your expected bursts.

      We did a dead launch to 60 million customers (0 to 60 million, no organic growth phase) this way. I wouldn't want to do that on a VPS.

      [0] https://cloudplatform.googleblog.com/2013/11/?m=1

    • flomo 3 minutes ago
      Am I the only one who remembers how shady a lot of those VPS/hosting companies were? It seemed to be a race to the bottom, so a 'good' outfit might suck or completely disappear a couple of years later. (Also, pricing was all over the map; I had a client who was paying $150/mo for a VPS.) Hetzner survived, but for a long time they had a reputation as a spam farm. So I get the initial appeal of AWS, used tactically. But for larger companies, it's something like IBM or Oracle: if you are price-sensitive, it's not for you.
    • rglover 36 minutes ago
      Not first, but it was the first with a planet-scale marketing budget.

      I miss the Media Temple days.

  • andai 1 hour ago
    At last my quest to find the stooge has come to a bitter end!

    I saw some 192 core instances on Vultr, but I haven't tried them yet. What are you doing with all them cores?

    I often fantasized about spinning up hundreds of nodes for various projects that needed number crunching. Then realized "wait I can just rent one big box for an hour" haha. It's really cool that we can do that now.

    • andrewstuart 1 hour ago
      >> 192 cores ... What are you doing with all them cores?

      The ancient forgotten art of Vertical Scaling.

      • rglover 39 minutes ago
        It's remarkably zen and effective.
  • faangguyindia 1 hour ago
    Why do people even bother with cloud?

    I’ve a couple of apps doing a few million a day. I am using Hetzner and before that used DigitalOcean. Mind you, for close to a decade.

    People are unnecessarily complicating stuff, and these clouds can go very expensive very quickly.

    Recently, I came across a company that was spending $20k a month on GCP. I was like, are you kidding me, $20k for the kind of stuff you do? If you understood how CPU, RAM, and disk actually work, you wouldn't plaster "autoscaling hyper solutions" everywhere and burn money in the cloud.

    I moved their stuff out of the GCP managed solutions and ended up with a $200-400 per month bill. The CEO still can't believe it's even possible.

    I suggested they move to dedicated servers, but they didn't want that; they said they had to be seen to be on a hyperscaler cloud.

    OK, fine: we'd stay on the hyperscaler but not use any of their services other than VMs.

    They had racked up a ton of bills by using cloud monitoring, Datastore, autoscalers (with no proper tuning), and Kubernetes.

    I replaced all of it with Prometheus, Grafana, and Loki, moved most of the data from Datastore to Postgres and Mongo with replicas, and added Redis.

    I implemented a custom scaler that scales off application metrics, not just an arbitrary peg on CPU.
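
    A custom scaler like that can be surprisingly small. A minimal sketch, scaling on queue backlog instead of a CPU percentage (the capacity numbers and function names here are made up for illustration, not the commenter's actual code):

```python
import math

def desired_replicas(queue_depth, per_replica_capacity=100,
                     min_replicas=2, max_replicas=50):
    """Pick an instance count from an application metric (queue backlog)
    rather than a CPU threshold. All numbers here are illustrative."""
    target = math.ceil(queue_depth / per_replica_capacity)
    # Clamp to the allowed range so we never scale to zero or runaway.
    return max(min_replicas, min(max_replicas, target))
```

    A real scaler would feed this from the app's own metrics endpoint and call the cloud API to resize the instance group.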

    I implemented hot data reloads by packing the data updates into a gzip file, uploading it to GCS, and pulling it from the autoscaled units. Then I moved everything to Spot VMs.
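
    The hot-reload mechanism can be sketched roughly like this, with a local file standing in for the GCS object (the path and JSON format are assumptions for illustration; in production the blob would be uploaded to and downloaded from GCS):

```python
import gzip
import json
import pathlib

# Stand-in for gs://<bucket>/updates/latest.json.gz; in production this
# would be a GCS object rather than a local file.
HOT_DATA = pathlib.Path("/tmp/latest.json.gz")

def publish_update(records):
    """Pack the latest data updates into a single gzip blob."""
    HOT_DATA.write_bytes(gzip.compress(json.dumps(records).encode()))

def pull_update():
    """Each autoscaled unit polls this periodically and swaps the
    decoded result into its in-memory state."""
    return json.loads(gzip.decompress(HOT_DATA.read_bytes()))
```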

    The complexity of stuff in the cloud is high for nothing.

    • maccard 59 minutes ago
      At my previous startup: because AWS gave us a bunch of credits and helped us design the infra. It meant we ran for free what they designed for free.

      At a previous, bigger company, getting procurement to sign up with a new provider required writing a business case, justifying the spend, and then getting multiple competing quotes and speaking to their sales teams. Signing up to a new service takes _months_ even for $10/mo, as they'll negotiate for bulk discounts and the best possible terms on something that will literally cost less per year than one of the meetings they hold to discuss the "value". Meanwhile, on AWS I can click a button in the marketplace and it gets thrown onto the AWS account, which is pre-approved spending.

      • hibikir 15 minutes ago
        Many a big company migrated because they had those very same slow procurement problems with internal data centers. I saw multiple cloud migrations where internal friction was at a level where the price didn't matter: six months for the smallest VM, that kind of thing. Very adversarial relationships, often with very poor incentives, as the setup costs charged to other business units were way inflated, but the maintenance fees didn't pay enough. Paying 3x-4x more a year for just a semblance of reliability was seen as a big plus.
      • misir 28 minutes ago
        At my current team at a "bigcorp" I have noticed a similar pattern. We use AWS not because it's efficient in any way.

        We use it because we don’t want to deal with slow procurement process. It kills all the momentum.

      • xmcp123 40 minutes ago
        Have seen this repeatedly also.

        Watched one company end up with a $250k AWS bill when their credits expired (which they could not pay).

        • maccard 12 minutes ago
          If you let it go that far then you were going to blow it one way or another. It's not an excuse to totally ignore the cloud spend, but it is an excuse to defer it to a later date. If you're successful, fix it; if you're not, then AWS isn't getting paid anyway!
    • goosejuice 19 minutes ago
      > spending $20k a month on GCP

      > burning money in cloud

      I suspect there are two reasons why this happens.

      One is just the dissociation from opex that seems ever-present in the VC model. The other is that many startups settle on an ops solution before hiring ops, and the cost of switching isn't that attractive until they're faced with a dwindling runway and a down round.

    • edg5000 1 hour ago
      I think AWS is liked because, when it started, being able to get a new VPS up in minutes was still quite unusual. Many hosts took about 24 hours, I suspect, to get a new VM up; at least that was my experience. But nowadays there are probably many options for getting a VM instantly.

      I agree that it's overcomplicated, although having the self-service portal for things like assigning IPs is useful. Most of it seems like overkill, although being able to detach storage from VMs and such is quite flexible. But still.

      • maccard 55 minutes ago
        It's flexible but slow. We ran our C++ CI/CD on AWS at a previous company, using spot instances with volumes attached and detached dynamically. The performance was absolutely abysmal, because in effect you're running compilation across a networked file system, no matter what AWS says your throughput is.

        Our 64-core spot instances on Windows were taking 8-10x longer than our developer machines with the same core count, and a bunch of engineering went into the scaling, queue management, etc. If we'd just had a single bare-metal machine from Hetzner, we could have saved money _and_ reduced our iteration times.

    • kriz9 44 minutes ago
      The ease of getting things set up quickly, and usually for free, when starting up is very tempting. Later, migration is usually considered risky and not worth it because of the maintenance overhead; I would argue that migrating has actually become very easy.
    • andrewstuart 1 hour ago
      I worked for a startup company - the founders were really nice people and had put their own money in - quite a lot of money - to get the software built for the vision they had.

      By the time I joined, 18 months after development had started, a giant, complex, hideously tentacled software beast had been built that used every possible AWS service that the massive offshore team of developers could find to use.

      It should have been built on a single Linux box by a single senior developer with Python and Postgres or nodejs or Ruby or whatever.

      They went out of business after not too long and I couldn't help wondering if things might have been different if they hadn't spent a fortune building a giant money making machine for AWS, instead of making a web application on a Linux box.

      Every AWS project I have worked on has had some significant work put into programming AWS instead of writing business functionality.

      • cube00 1 hour ago
        > hideously tentacled software beast had been built that used every possible AWS service that the massive offshore team of developers could find to use

        To be fair, if they had an AWS Solutions Architect involved, they'd have been heavily pushed down this road, and if the architect managed to get into management's ear they'd have pushed the idea that serverless AWS features are vastly cheaper.

        If you're only responding to a handful of requests that's true, but once things ramp up you get "nickel and dimed" for everything: API Gateway requests, lambda execution time, DynamoDB read/write units, CloudWatch logs, outgoing data, step function transitions, S3 requests.

        I understand all those services cost money and they shouldn't be free, but I question whether paying all those micro-transactions ends up worse than paying for your own VMs, especially once your customers complain about the cold starts and you think you can fix it with "lambda warming".
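
        To see how the nickel-and-diming adds up, here's a back-of-the-envelope sketch; the unit prices are illustrative placeholders, not quoted AWS rates, and the traffic shape is assumed:

```python
# Illustrative per-unit prices (NOT current AWS rates; check the pricing pages).
PRICE = {
    "api_gw_per_m_requests": 1.00,      # API Gateway, per million requests
    "lambda_per_m_requests": 0.20,      # Lambda invocations, per million
    "lambda_per_gb_second": 0.0000167,  # Lambda compute, per GB-second
}

def monthly_serverless_bill(requests_m, avg_ms=120, mem_gb=0.5):
    """Rough monthly cost for a request-driven serverless stack,
    given millions of requests per month."""
    gb_seconds = requests_m * 1_000_000 * (avg_ms / 1000) * mem_gb
    return (requests_m * PRICE["api_gw_per_m_requests"]
            + requests_m * PRICE["lambda_per_m_requests"]
            + gb_seconds * PRICE["lambda_per_gb_second"])
```

        At low volume the bill rounds to nothing; at hundreds of millions of requests a month, the per-call charges start to dominate, which is the "nickel and dimed" effect described above.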

        • maccard 53 minutes ago
          To be fair, that's an AWS problem, not a Lambda problem. If you replace Lambda with EC2, the only things you save on are Lambda and Step Functions (and maybe API Gateway, but then you need to pay for a load balancer or a public IP); the rest you need to pay for anyway.
    • MagicMoonlight 25 minutes ago
      This isn't a like-for-like comparison though, is it?

      You removed all of their logging and all of their redundancy and reliability, and replaced it with shitters that will all explode if the small provider's one data centre goes down.

      And if someone penetrates this mega server, they’ll be able to wipe all your logs or tamper with them, to hide the attack.

      If your storage servers go down, everything they have is gone. And these providers don’t offer the finest hardware. How do you know all of those drives aren’t from the same batch? They will be, because they’re a bulk buyer with a single data centre.

  • cynicalsecurity 29 minutes ago
    Preach, brother.
  • znpy 1 hour ago
    > Of course I do not pay for premium support, so I have to wait the 24 hours that they said it would take them to reply. It's 3 days and AWS support has not replied.

    The writing has been on the wall for a few years now, and this is particularly evident to those that have worked at AWS: Amazon is in its day-2 era.

    Amazon being in its day-2 era means that most of what has been written in the past twenty years about Amazon is not valid anymore.

    “Customer obsession” is literally their first leadership principle, and stellar support was their defining characteristic.

  • cmiles8 1 hour ago
    There was a time when AWS was truly innovative, but it’s long since transformed into Amazon’s cash cow and is behaving like such.

    Innovation has ground to a halt, with mostly just meh "hey, us too" launches. Pricing and design patterns feel increasingly focused on locking you in. AWS folks tell me that internally they talk a lot about making sure things are "sticky" with customers. The best engineering talent no longer wants to work there, and it shows, especially in places like AI, where AWS has just released wave after wave of discombobulated nonsense.

    As a core “rent-a-server” concept with a few add on services there’s still a lot of utility, but AWS is gradually becoming a boring baseline utility with a ton of distracting half baked stuff jammed on top. Most companies I talk to are no longer focused on single cloud and increasingly are bringing a lot of workloads back on prem or in colos. Not everything, but for a lot of stuff that just makes more sense and is a heck of a lot cheaper.

    The chips business in Annapurna is probably the most interesting thing and that plays to its strength of the boring low level infrastructure stuff. Nearly everything AWS tries to do beyond chips and rent-a-server plays is a hot mess.

    AWS isn’t going away, but its future looks a lot less exciting and inspiring than the story that got us to this point.

  • MagicMoonlight 1 hour ago
    These complaints are very weak.

    Lambda is incredibly simple to use; it just runs a function for you.

    Not sure how you could burn so much with DynamoDB. It's serverless and incredibly cheap. They must have been doing something insane, like scanning through a huge dataset over and over.
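
    For a sense of scale: a Scan reads every item and bills read units for all of it, so repeatedly scanning a big table burns money fast. A rough sketch of the arithmetic (the price is an assumed placeholder, not a quoted AWS rate):

```python
PRICE_PER_MILLION_READ_UNITS = 0.25  # assumed on-demand rate; check AWS pricing

def monthly_scan_cost(table_gb, scans_per_day, days=30):
    """Cost of eventually consistent full-table scans. One read request
    unit covers two eventually consistent 4 KB reads (0.5 unit per 4 KB)."""
    table_kb = table_gb * 1024 * 1024
    units_per_scan = (table_kb / 4) * 0.5
    total_units = units_per_scan * scans_per_day * days
    return total_units / 1_000_000 * PRICE_PER_MILLION_READ_UNITS
```

    Under these assumptions, scanning a 100 GB table once an hour lands in the low thousands of dollars a month, while a keyed Query touching a few items costs effectively nothing.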

    Being salty that Gary couldn’t sell enough of his paid service and AWS is competing with it isn’t a meaningful complaint. I want something in AWS, not on Gary’s servers.

  • h4kunamata 1 hour ago
    AWS IAM is hot garbage. GCP might not be the coolest kid on the block, but its IAM rocks.

    The AWS CLI??? Holy guacamole, what a mess. Using it feels like filling out digital identification paperwork just to get the basics done.

    While GCP CLI is like "sure, here"!

    • cube00 1 hour ago
      It's a shame GCP's console and their CLI are both so painfully slow.

      You're also putting your business at risk with Google randomly banning accounts and not providing timely appeals. [1]

      [1]: https://news.ycombinator.com/item?id=45798827

      • vrick 1 hour ago
        I mean this article is about AWS doing the exact same thing.
    • liveoneggs 58 minutes ago
      it's funny how being used to something makes it easier to use
  • _wire_ 1 day ago
    I love you baby, I need you! I'd never cheat on you! Come back!

    Hey good lookin'

    • renticulous 26 minutes ago
      Looks like a blog post written to get attention and solve his personal problem.