After a long AI-driven coding session in Cursor, getting a "second pair of eyes" on the generated code is crucial. Manually reviewing everything can be time-consuming. This lesson demonstrates how to use the CodeRabbit VS Code extension for an initial AI-powered code review, and then leverage Cursor's Agent to intelligently address the identified issues, even integrating test runs into the AI conversation.
Workflow demonstrated in this lesson:
- Install and use the CodeRabbit extension to initiate an AI code review on your recent changes.
- Examine potential issues and suggestions surfaced by CodeRabbit in the editor's sidebar.
- Apply CodeRabbit's direct code suggestions or use its "Fix with AI" feature.
- Copy CodeRabbit's "Fix with AI" instructions into Cursor's Agent for more interactive and context-aware fixes.
- Provide additional context to Agent, such as specific test commands (e.g., from Atuin command history), to guide the AI in fixing code and ensuring tests pass.
- Iteratively work with Agent and CodeRabbit to refine code, apply fixes, and verify changes.
- Stage changes with Git to easily revert or compare AI-suggested modifications.
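The staging step above gives you a safe checkpoint: stage the AI-generated code before applying review fixes, then diff against that checkpoint and revert anything that doesn't pass. A minimal sketch of that loop is below, run in a throwaway repo for illustration; `app.js` and the edits are hypothetical stand-ins for your own files, and the Atuin lookup is shown as a comment since it depends on your shell history.

```shell
set -e
# Standalone demo in a temporary repository.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# The AI-generated code from your Cursor session (hypothetical file).
echo 'console.log("v1")' > app.js
git add -A            # checkpoint: stage changes before applying review fixes

# Recall the project's test command from shell history (requires Atuin), e.g.:
#   atuin search "npm test"

# Agent applies a CodeRabbit-suggested fix (simulated here by an edit).
echo 'console.log("v2")' > app.js

git diff --stat       # compare the AI-suggested modification to the checkpoint
git checkout -- app.js  # tests failed? revert to the staged version
cat app.js            # back to the checkpointed code
```

The key design point is that `git add -A` before the fix round means every subsequent `git diff` shows only what the review pass changed, and `git checkout -- <file>` restores the staged version without losing the original AI session's work.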
Key benefits:
- Automated First-Pass Review: Quickly get AI-driven feedback on your code changes.
- Integrated AI Fixing: Seamlessly transition from CodeRabbit's suggestions to Cursor's Agent for more nuanced problem-solving.
- Contextual Debugging with AI: Provide specific test commands and error messages to Agent for targeted fixes.
- Iterative Refinement: Use the AI tools in a loop to address issues, test, and improve code quality.
- Improved Confidence: Validate AI-generated and AI-assisted changes by running tests as part of the workflow.
By combining CodeRabbit's review capabilities with Cursor's Agent, you can create a more efficient and reliable workflow for reviewing, fixing, and testing code generated or modified with AI assistance.