bhaktatejas922 4 hours ago

We just doubled our speculative code edit throughput and hit 10k tok/sec per request!

Morph now merges code at 10,500 tokens/sec, roughly 4× faster than the best speeds on Cerebras.

That kind of speed makes previously impractical workloads trivial: applying complex edits across a 40k-token document now takes under 4 seconds. This isn’t a vanity metric: we think it unlocks an entirely new domain of AI use cases where codebases, configs, or long documents can be semantically edited in real time.
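A quick back-of-the-envelope check of that claim, using only the figures quoted above:

```python
# Sanity check: time to stream an edit across a long document at the
# throughput quoted in the post above.
doc_tokens = 40_000      # size of the document being edited
throughput = 10_500      # claimed merge speed, tokens/sec

seconds = doc_tokens / throughput
print(f"{seconds:.1f} s")  # prints "3.8 s", i.e. "under 4 seconds"
```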

Morph is a Fast Apply model dedicated to merging edits from frontier LLMs. We want to enable developers to build real-time interfaces with AI.

  • NitpickLawyer 3 hours ago

    Help me understand. Is this for cases where you have a file and you "ask" an LLM to change something, and they reply in chat mode with something like < //--unchanged code \n changed line \n changed line \n //----remaining code unchanged > ?

    If so, isn't this flow like 6mo old, and not really used anymore? The latest tools (terminal based and vscode extensions like cline/roo/kilo) already support "diff edits", where the model outputs a diff format that the tool speaks. I get "instant" edits that way, right in my IDE, and model support has been great (gpt5, claude4, gemini2.5, grok-fast-1, etc.)

    So what's the use case of this model, then? Cool technical results, and congrats, but it seems the "field" has already solved for this particular problem?
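For concreteness, the "diff edits" mentioned above usually mean the model emits SEARCH/REPLACE blocks that the tool applies verbatim. A minimal sketch of such an applier (the delimiter syntax here follows the aider/Cline convention; exact markers vary by tool):

```python
import re

def apply_search_replace(source: str, patch: str) -> str:
    """Apply SEARCH/REPLACE blocks (aider/Cline-style) to source text.

    Each block's SEARCH text must appear in the source; the first
    occurrence is swapped for the REPLACE text.
    """
    blocks = re.findall(
        r"<<<<<<< SEARCH\n(.*?)\n=======\n(.*?)\n>>>>>>> REPLACE",
        patch,
        flags=re.DOTALL,
    )
    for search, replace in blocks:
        if search not in source:
            raise ValueError(f"SEARCH text not found:\n{search}")
        source = source.replace(search, replace, 1)
    return source
```

The tool-side applier is cheap, but it only works when the model reproduces the SEARCH text exactly; a mismatch means a failed edit and another round trip.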

    • bhaktatejas922 2 hours ago

      https://morphllm.com/benchmarks

      Using fast apply is more reliable on the first pass, and it's faster and cheaper. You prompt your agent to output in a lazy format and our model learns how to merge it in.

      The IDEs listed typically do turn-based search and replace, which uses more tokens and takes longer.

      Kilo supports morph as well!
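A toy sketch of the flow the reply describes: the agent emits a "lazy" edit whose unchanged regions are elided behind a marker, and a merger splices the edited segments back into the file. The marker string and anchoring rule below are assumptions for illustration; Morph's actual model performs this merge neurally, not with string matching:

```python
# Toy "lazy edit" merge. ASSUMPTIONS (not Morph's actual method):
# the marker elides unchanged code, and every edited segment starts
# and ends on a line copied verbatim from the original, which we use
# as splice anchors.
MARKER = "// ... existing code ..."

def lazy_merge(original: str, edit: str) -> str:
    lines = original.splitlines()
    # Literal segments are whatever sits between markers in the edit.
    segments = [s.strip("\n") for s in edit.split(MARKER) if s.strip()]
    for seg in segments:
        seg_lines = seg.splitlines()
        i = lines.index(seg_lines[0])      # first line: unchanged anchor
        j = lines.index(seg_lines[-1], i)  # last line: unchanged anchor
        lines[i : j + 1] = seg_lines       # splice the edited span in
    return "\n".join(lines) + "\n"
```

The point of a fast-apply model is that the agent never has to reproduce the untouched code, so the edit stays short; the matching burden moves from the frontier model to the merge model.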

anon191928 3 hours ago

The SEC? They will be against this. They have been against financial innovation, and if they see this they will be against it too. The SEC is special. The SEC is ok.