Local AI Code Reviews with the CodeRabbit Extension in Cursor

After a long AI-driven coding session in Cursor, getting a "second pair of eyes" on the generated code is crucial. Manually reviewing everything can be time-consuming. This lesson demonstrates how to use the CodeRabbit VS Code extension for an initial AI-powered code review, and then leverage Cursor's Agent to intelligently address the identified issues, even integrating test runs into the AI conversation.

Workflow demonstrated in this lesson:

  • Install and use the CodeRabbit extension to initiate an AI code review on your recent changes.
  • Examine potential issues and suggestions surfaced by CodeRabbit within the VS Code sidebar.
  • Apply CodeRabbit's direct code suggestions or use its "Fix with AI" feature.
  • Copy CodeRabbit's "Fix with AI" instructions into Cursor's Agent for more interactive and context-aware fixes.
  • Provide additional context to Agent, such as specific test commands (e.g., from Atuin command history), to guide the AI in fixing code and ensuring tests pass.
  • Iteratively work with Agent and CodeRabbit to refine code, apply fixes, and verify changes.
  • Stage changes with Git to easily revert or compare AI-suggested modifications.

Key benefits:

  • Automated First-Pass Review: Quickly get AI-driven feedback on your code changes.
  • Integrated AI Fixing: Seamlessly transition from CodeRabbit's suggestions to Cursor's Agent for more nuanced problem-solving.
  • Contextual Debugging with AI: Provide specific test commands and error messages to Agent for targeted fixes.
  • Iterative Refinement: Use the AI tools in a loop to address issues, test, and improve code quality.
  • Improved Confidence: Validate AI-generated and AI-assisted changes by running tests as part of the workflow.

By combining CodeRabbit's review capabilities with Cursor's Agent, you can create a more efficient and reliable workflow for reviewing, fixing, and testing code generated or modified with AI assistance.

Transcript

[00:00] After a long conversation is finished and you need an extra set of eyes on all of the code changes, you can install an extension called CodeRabbit, which you can find in the marketplace. I'm going to focus on the CodeRabbit view here, and it's automatically prompting me to review all of the changes. You can also invoke this straight from the Command+Shift+P menu: just type "start review", hit enter, and go grab a snack, because it's going to take a couple of minutes. Once the review is complete, if it finds any potential issues, like the four with an exclamation point shown here, you can dig into them. Let's make this a little bigger and click on the individual comments and summaries that we may need to address. It will make a code suggestion for you, where you can automatically click "apply suggested change".
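
If you prefer the terminal, the extension can also be installed from the command line. A minimal sketch follows; the extension ID shown is an assumption, so verify it by searching for "CodeRabbit" in the marketplace.

    # Install the CodeRabbit extension via the VS Code CLI.
    # NOTE: the extension ID below is an assumption -- confirm it in the marketplace.
    code --install-extension coderabbit.coderabbit-vscode

    # Cursor exposes the same flag through its own CLI, if you've run
    # "Install 'cursor' command" from Cursor's command palette:
    cursor --install-extension coderabbit.coderabbit-vscode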

[00:45] I'm actually really going to have to think about this one, depending on the behavior I want. So I'm going to at least stage my changes under Git, so that once I apply any of these suggestions I can easily discard anything. We can check on the other comments, and we'll go ahead and accept this one as well. We'll check on this timing issue. And for this one, just to show it off, we're going to say "Fix with AI".
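
As a sketch of that staging safety net (the file path is a placeholder):

    # Stage everything BEFORE applying AI-suggested fixes, so the index
    # becomes a checkpoint you can diff against or roll back to.
    git add -A

    # After applying a CodeRabbit suggestion, see what it changed
    # relative to the staged checkpoint:
    git diff

    # Discard a suggestion you don't want, restoring the staged version:
    git restore path/to/file.ts

    # Or keep it by folding it into the staged snapshot:
    git add path/to/file.ts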

[01:11] You can see these instructions are copied to the clipboard, so I can take them over to a new Composer session with Command+I, Command+N, and hit paste. This way, instead of taking CodeRabbit's code snippet, I can give additional instructions, or I can pick the smartest model possible and let it fix the issue with more feedback and interaction from me. For example, in my project I have tests covering all of this, so I'm going to grab my specific test command: I press up, and it's this line here. This command history tool is called Atuin.

[01:45] You can go ahead and Google that if you're interested. I'm going to hit Ctrl+Y to copy that command, and I can paste it here. I find myself using that workflow a lot: finding something in my command history, bringing it into an Agent conversation, and just saying "yes, please run these tests." Then we can verify whether the fix we're applying broke anything. Now I'll ask it to please fix this error.
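
As a sketch of that history-to-Agent workflow: inside Atuin's interactive search (bound to the up arrow and Ctrl+R by default), Ctrl+Y copies the highlighted command. You can also query history non-interactively; the query string and the example test command below are hypothetical.

    # Print the most recent history entries matching "test",
    # commands only (no timestamps or metadata):
    atuin search --cmd-only --limit 5 "test"

    # Hypothetical output -- the kind of command you'd paste into the
    # Agent with "please run these tests and fix any failures":
    # npm test -- --grep "timing"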

[02:05] This does look like a scenario where the code review caught something in this file but didn't understand the impact that would have on the tests. So when I clicked accept, it may have been a legitimate fix, and my tests might have been wrong, but it just didn't catch the impact of the fix on the tests. So I'll continue with "yes, please fix," and I'm going to jump over to the comments. Let's grab this comment, scroll down a bit, and find it. I'm going to go ahead and accept this. And then, obviously, all of these steps are where you have to start using your brain: to look at what the AI did in your conversation, and at what the AI is saying didn't quite go as planned.