Launch HN: mrge.io (YC X25) – Cursor for code review
Hey HN, we’re building mrge (<a href="https://www.mrge.io/home">https://www.mrge.io/home</a>), an AI code review platform to help teams merge code faster with fewer bugs. Our early users include Better Auth, Cal.com, and n8n—teams that handle a lot of PRs every day.
<p>Here’s a demo video: <a href="https://www.youtube.com/watch?v=pglEoiv0BgY" rel="nofollow">https://www.youtube.com/watch?v=pglEoiv0BgY</a>
<p>We (Allis and Paul) are engineers who faced this problem when we worked together at our last startup. Code review quickly became our biggest bottleneck—especially as we started using AI to code more. We had more PRs to review, subtle AI-written bugs slipped through unnoticed, and we (humans) increasingly found ourselves rubber-stamping PRs without deeply understanding the changes.
<p>We’re building mrge to help solve that. Here’s how it works:
<p>1. Connect your GitHub repo via our GitHub app in two clicks (and optionally download our desktop app). GitLab support is on the roadmap!
<p>2. AI review: when you open a PR, our AI reviews your changes directly in an ephemeral, secure container. It has context on not just that PR but your whole codebase, so it can pick up patterns and leave comments directly on changed lines. Once the review is done, the sandbox is torn down and your code deleted—we don’t store it, for obvious reasons.
<p>3. Human-friendly review workflow: jump into our web app (it’s like Linear, but for PRs). Changes are grouped logically (not alphabetically), with important diffs highlighted, visualized, and ready for faster human review.
<p>The AI reviewer works a bit like Cursor in the sense that it navigates your codebase using the same tools a developer would—like jumping to definitions or grepping through code.
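For the curious: "jump to definition" in tooling like this is usually a Language Server Protocol request. A minimal sketch of how a client frames such a request over a pipe (the file path and position here are made up for illustration; this is not mrge's actual code):

```python
import json


def lsp_request(request_id: int, method: str, params: dict) -> bytes:
    """Frame a JSON-RPC request the way LSP clients send it:
    a Content-Length header, a blank line, then the JSON body."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    }).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n%s" % (len(body), body)


# A "go to definition" request for the symbol at line 42, column 11
# of a hypothetical file checked out in the review sandbox.
msg = lsp_request(1, "textDocument/definition", {
    "textDocument": {"uri": "file:///sandbox/repo/src/auth.ts"},
    "position": {"line": 41, "character": 10},  # LSP positions are 0-based
})
```

The language server replies with the location(s) of the definition, which is what lets a reviewer (human or AI) hop around an unfamiliar codebase quickly.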
<p>But a big challenge was that, unlike Cursor, mrge doesn’t run in your local IDE or editor. We had to recreate something similar entirely in the cloud.
<p>Whenever you open a PR, mrge clones your repository and checks out your branch in a secure, isolated temporary sandbox. We provision this sandbox with shell access and a Language Server Protocol (LSP) server. The AI reviewer then reviews your code, navigating the codebase exactly as a human reviewer would—using shell commands and common editor features like "go to definition" or "find references". When the review finishes, we immediately tear down the sandbox and delete the code—we don’t want to permanently store it, for obvious reasons.
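That lifecycle (clone, check out, review, tear down) can be sketched roughly like this. This is a simplified illustration, not mrge's actual implementation, and `run_review` is a stand-in for the real AI reviewer:

```python
import shutil
import subprocess
import tempfile
from pathlib import Path


def run_review(checkout: Path) -> list[str]:
    # Stand-in for the real reviewer, which would drive a shell and an
    # LSP server here. This version just lists the Python files it sees.
    return sorted(str(p.relative_to(checkout)) for p in checkout.rglob("*.py"))


def review_in_sandbox(repo_url: str, branch: str) -> list[str]:
    """Clone the PR branch into a throwaway directory, run the review,
    and always delete the checkout afterwards so the code is never stored."""
    sandbox = tempfile.mkdtemp(prefix="mrge-sandbox-")
    try:
        subprocess.run(
            ["git", "clone", "--branch", branch, "--single-branch",
             repo_url, sandbox],
            check=True, capture_output=True,
        )
        return run_review(Path(sandbox))
    finally:
        # Tear down the sandbox whether the review succeeded or failed.
        shutil.rmtree(sandbox, ignore_errors=True)
```

The `try`/`finally` is the important part: the checkout is guaranteed to be deleted even if the review crashes.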
<p>We know cloud-based review isn’t for everyone, especially if security or compliance requires local deployments. But a cloud approach lets us run state-of-the-art AI models without local GPU setups, and provide a single, consistent AI review per PR for an entire team.
<p>The platform itself focuses entirely on making <i>human</i> code review easier. A big inspiration came from productivity-focused apps like Linear and Superhuman, products that show just how much thoughtful design can impact everyday workflows. We wanted to bring that same feeling to code review.
<p>That’s one reason we built a desktop app. It lets us deliver a more polished experience, complete with keyboard shortcuts and a snappy interface.
<p>Beyond performance, the main thing we care about is making it easier for humans to read and understand code. For example, traditional review tools sort changed files alphabetically—which forces reviewers to figure out for themselves the order in which to review changes. In mrge, files are automatically grouped and ordered based on their logical connections, letting reviewers jump straight in.
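To illustrate the idea (mrge's real grouping is based on logical connections and is smarter than this), even grouping changed files by top-level directory instead of showing one flat alphabetical list already puts related changes next to each other:

```python
from collections import defaultdict


def group_changed_files(paths: list[str]) -> list[tuple[str, list[str]]]:
    """Group a PR's changed files by top-level directory, biggest group
    first, instead of presenting one flat alphabetical list."""
    groups: dict[str, list[str]] = defaultdict(list)
    for path in paths:
        top = path.split("/", 1)[0] if "/" in path else "(root)"
        groups[top].append(path)
    # Put the largest group first so the core of the change leads the review.
    return sorted(groups.items(), key=lambda kv: (-len(kv[1]), kv[0]))


changed = ["README.md", "api/routes.py", "api/models.py",
           "web/app.tsx", "api/tests/test_routes.py"]
grouped = group_changed_files(changed)
# The three "api" files come first as one group, then the single-file groups.
```

A reviewer reads the `api` changes as one unit rather than having `README.md` and `api/models.py` interleaved by the alphabet.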
<p>We think the future of coding isn’t about AI replacing humans—it’s about giving us better tools to quickly understand high-level changes, abstracting away more and more of the code itself. As code volume continues to increase, this shift is going to become increasingly important.
<p>You can sign up now (<a href="https://www.mrge.io/home">https://www.mrge.io/home</a>). mrge is currently free while we’re still early. Our plan is to eventually charge closed-source projects on a per-seat basis, and to continue giving mrge away for free to open-source projects.
<p>We’re very actively building and would love your honest feedback!