> To streamline the process while maintaining quality, we built our own internal evaluation tool: a Git-style diff viewer with annotation tools for categorizing merge outcomes.
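For flavor, here is a minimal sketch of the kind of diff-and-annotate loop such a viewer wraps. This is hypothetical (the authors' actual tool isn't shown), the category names are illustrative, and it uses Python's `difflib` for the Git-style unified diff:

```python
import difflib

# Illustrative category labels, not the authors' actual taxonomy.
CATEGORIES = ["clean-merge", "smoothing", "hallucination", "broken-code"]

def review_case(case_id: str, expected: str, merged: str) -> dict:
    """Show a unified (Git-style) diff of the apply model's merge against
    the expected result, then ask the reviewer for a category label."""
    diff = difflib.unified_diff(
        expected.splitlines(keepends=True),
        merged.splitlines(keepends=True),
        fromfile=f"{case_id}/expected",
        tofile=f"{case_id}/merged",
    )
    print("".join(diff) or "(outputs identical)")
    label = input(f"label {CATEGORIES}: ").strip()
    return {"case": case_id, "label": label}
```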
vibecoding internal eval tools is the single best use case of ai accelerating ai i know of! nice to see
(sorry if this gets asked a lot) - any philosophical/methodology differences from MorphLLM that you'd call out, since you seem to be a direct alternative?
Hey, happy to answer! The manual evals we did showed that both morph-v3-fast and morph-v3-large exhibit significantly more smoothing and hallucination behavior than our model.
It's hard to know for sure because their methods aren't public, but my guess is that the dataset they constructed pushes their Fast Apply model to more aggressively fix mistakes introduced by the frontier model in the edit snippet.
This aligns with the fact that their flagship model (morph-v3-large) is 4x slower than ours -- the smoothed/hallucinated tokens aren't present in the initial code or the edit snippet, so they break speculative continuations more frequently and force a fallback to slower token-by-token decoding. Their 2x faster model (morph-v3-fast) is likely quantized more aggressively (maybe fp4, running on B200s?), because it exhibits very strange behaviors like hallucinating invalid characters at random points that make the code non-compilable.
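To make the speed argument concrete, here is a toy sketch (illustrative only, not any vendor's actual implementation) assuming the apply model decodes speculatively by verifying draft tokens copied from the existing code: any smoothed or hallucinated token diverges from the draft, so the remaining draft is rejected and decoding drops to the slow path.

```python
def accepted_prefix(draft: list[str], model_output: list[str]) -> int:
    """Count draft tokens accepted before the first divergence."""
    n = 0
    for d, m in zip(draft, model_output):
        if d != m:
            break
        n += 1
    return n

# Toy token sequences (hypothetical):
draft    = ["import", "os", "\n", "def", "load", "(", "path", ")", ":"]
faithful = ["import", "os", "\n", "def", "load", "(", "path", ")", ":"]
smoothed = ["import", "os", ",", "sys", "\n", "def", "load", "(", "path", ")", ":"]

print(accepted_prefix(draft, faithful))  # 9: the whole span is accepted in one verification step
print(accepted_prefix(draft, smoothed))  # 2: the "smoothed" token at position 3 breaks the run
```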
From an accuracy POV, auto-smoothing is helpful for fixing obvious mistakes in the edit snippet, like missed imports from well-known packages. However, it also increases the frequency of code-breaking hallucinations, like invalid local imports, among other functional changes you might not want a small apply model to perform.
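As a concrete (hypothetical) example: an apply model that adds `import numpy as np` to a file that clearly uses `np` is doing useful smoothing, while one that invents `from my_proj.utils import merge_helper` for a module that doesn't exist breaks the build. A cheap automated check for the second kind is to verify that every absolute import in the merged file actually resolves; a minimal sketch:

```python
import ast
import importlib.util

def unresolvable_imports(source: str) -> list[str]:
    """Return imported module names in `source` that Python can't locate --
    a cheap proxy for hallucinated imports. Relative imports are skipped,
    since resolving them requires the repo's package layout."""
    missing = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names = [node.module]
        else:
            continue
        for name in names:
            if importlib.util.find_spec(name.split(".")[0]) is None:
                missing.append(name)
    return missing

# "my_proj" is a made-up module name for illustration.
print(unresolvable_imports("import numpy\nfrom my_proj.utils import merge_helper"))
# ['my_proj.utils'] if numpy is installed and my_proj isn't importable
```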