8 comments

  • thelastgallon 25 minutes ago
    > On many task lengths (including those near their plateau) they cost 10 to 100 times as much per hour. For instance, Grok 4 is at $0.40 per hour at its sweet spot, but $13 per hour at the start of its final plateau. GPT-5 is about $13 per hour for tasks that take about 45 minutes, but $120 per hour for tasks that take 2 hours. And o3 actually costs $350 per hour (more than the human price) to achieve tasks at its full 1.5 hour task horizon. This is a lot of money to pay for an agent that fails at the task you’ve just paid for 50% of the time — especially in cases where failure is much worse than not having tried at all.
  • quicklywilliam 1 hour ago
    Interesting read. I don't know if I quite buy the evidence, but it's definitely enough to warrant further investigation. It also matches my personal experience, which is that tools like Claude Code burn through more and more tokens as we push them to do bigger and bigger tasks. But we all know the frontier model companies are burning through money in an unsustainable race to get you and your company hooked on their tools.

    So: I buy that the cost of frontier performance is going up exponentially, but that doesn't mean there is a fundamental link. We also know that benchmark performance of much smaller/cheaper models has been increasing (as far as I know, METR only looks at frontier models), which makes me wonder whether the exponential cost/time-horizon relationship holds only for frontier models.

  • agentifysh 1 hour ago
    Until there is some drastically new hardware, we are going to see a situation similar to proof of work, where a small group hoards the hardware and can collude on prices.

    The difference is that current prices are heavily subsidized with OPM (other people's money).

    Once the narrative shifts to something more realistic, I can see prices increasing across the board: forget $200/month for Codex Pro, expect $1000/month or something similar.

    So it's a race between new hardware supply and paradigm shifts hitting the market vs. the tide going out in the financial markets.

    • colechristensen 54 minutes ago
      Doubtful, local models are the competitive future that will keep prices down.

      128GB is all you need.

      A few more generations of hardware, and open models will leave people pretty happy doing whatever they need on their laptop locally, with big SOTA models reserved for special purposes. There will be a pretty big bubble burst when there aren't enough customers willing to pay the $1000/month per seat needed to sustain the enormous datacenter models.

      Apple will win this battle, and Nvidia will be second once its focus shifts to workstations instead of servers.

      • lookaround 14 minutes ago
        > 128GB is all you need.

        My guy, look around.

        They are coming for personal compute.

        Where are you going to get these 128GBs? Aquaman? [0]

        The ones who make RAM are inexplicably tying their fate to a future that is all LLMs, everywhere.

        [0] https://www.youtube.com/watch?v=0-w-pdqwiBw

        • foota 5 minutes ago
          More like RAM producers are selling supply to the highest bidder, no? If this doesn't peter out, supply will eventually normalize at a higher but less insane price.
        • naveen99 7 minutes ago
          Cloud can’t make money off of you and pay more than you for the hardware at the same time.
  • dang 4 hours ago
    Related ongoing thread:

    Measuring Claude 4.7's tokenizer costs - https://news.ycombinator.com/item?id=47807006 (309 comments)

  • matt3210 54 minutes ago
    I took a month break and my side project now takes 2x as many tokens.
  • greenmilk 2 hours ago
    Are any inference providers currently making a profit on inference specifically? (I know Google makes money overall.)
    • henry2023 10 minutes ago
      Third parties selling open-weight inference on OpenRouter are surely selling at a profit; they have zero reason to subsidize it.
    • wsun19 1 hour ago
      Pretty much every major American inference provider claims to make a profit on API-based inference. Consumer plans might be subsidized overall, but it's hard to say, since they're a black box and some consumers don't fully use their plans.
    • wavemode 1 hour ago
      Selling inference is not fundamentally different from selling compute: you amortize the lifetime cost of owning and operating the GPUs and turn that into a per-token price. The risk of loss comes from low demand (your facilities running underutilized), but I doubt inference providers are suffering from that.

      Where the long-term payoff still seems speculative is for companies doing training rather than just inference.
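
      The amortization arithmetic above can be sketched in a few lines. All numbers here are illustrative assumptions (GPU price, lifespan, opex, throughput, utilization), not figures from any actual provider:

```python
def breakeven_price_per_million_tokens(
    gpu_cost_usd: float,       # assumed purchase price of one GPU
    lifespan_years: float,     # assumed useful life before replacement
    opex_usd_per_hour: float,  # assumed power/cooling/hosting cost per GPU-hour
    tokens_per_second: float,  # assumed sustained generation throughput per GPU
    utilization: float,        # assumed fraction of hours serving paid traffic
) -> float:
    """Amortize GPU capex over its lifetime, add opex, and divide the
    resulting hourly cost by the paid tokens produced per hour."""
    lifetime_hours = lifespan_years * 365 * 24
    capex_per_hour = gpu_cost_usd / lifetime_hours
    cost_per_hour = capex_per_hour + opex_usd_per_hour
    paid_tokens_per_hour = tokens_per_second * 3600 * utilization
    return cost_per_hour / paid_tokens_per_hour * 1_000_000

# Hypothetical example: $30k GPU, 4-year life, $1.50/hr opex,
# 1000 tok/s sustained, 60% utilization.
price = breakeven_price_per_million_tokens(30_000, 4, 1.50, 1000, 0.60)
```

      With those made-up inputs the breakeven works out to roughly a dollar per million tokens, which also shows where the utilization risk bites: halve the utilization and the breakeven price per token doubles.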

      • Gigachad 1 hour ago
        There’s a lot of debate over the useful lifespan of the hardware, though. That number seems very vibes-based, and it determines whether these datacenters are a good investment or a disaster.
    • jagged-chisel 2 hours ago
      Google definitely makes money in other areas. Do they make money on inference?