What You’ll Set Up
By the end of this tutorial, every pull request in your connected repositories will automatically receive AI-powered code review comments — detecting bugs, security vulnerabilities, and best-practice violations before code reaches production.
Navigate to Code Review Settings
Go to Settings > Code Review in your CloudThinker workspace. You’ll see options to connect GitHub or GitLab repositories.
Connect GitHub
Click Connect GitHub to install the CloudThinker GitHub App.
You need Organization Owner permissions to install the GitHub App. If you don’t have access, ask your org admin to approve the installation.
- Select the GitHub organization
- Choose which repositories to grant access (all or selected)
- Authorize the app
Connect GitLab (Alternative)
For GitLab, you have two authentication options:

Option A: OAuth (recommended)
- Click Connect GitLab
- Authorize via OAuth flow
- Select projects to monitor
Option B: Access Token

- Generate a Project Access Token or Group Access Token in GitLab
- Paste it in CloudThinker settings
- Select the projects to monitor
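If you go the access-token route, it is worth confirming the token works against the GitLab API before pasting it into CloudThinker. The sketch below builds the request you would send (the token value is a placeholder; the `PRIVATE-TOKEN` header and `/api/v4/projects` endpoint are standard GitLab API conventions):

```python
# Hypothetical sanity check: compose the GitLab API request you could
# send (e.g. via curl) to confirm a Project/Group Access Token is valid.
# The token below is a placeholder -- never commit real tokens.
from urllib.request import Request

token = "glpat-xxxxxxxxxxxxxxxxxxxx"  # placeholder value
req = Request(
    "https://gitlab.com/api/v4/projects?membership=true",
    headers={"PRIVATE-TOKEN": token},
)
# urllib normalizes header names; the token is attached as expected.
print(req.get_header("Private-token") == token)  # → True
```

Sending this request (for example with `urllib.request.urlopen(req)`) should return a JSON list of projects the token can see; a 401 means the token is wrong or expired.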
Select Repositories
After connecting, you’ll see a list of available repositories. Toggle on the ones you want CloudThinker to review.

For each repository, you can configure:
- Auto-review: Automatically review every new PR (recommended)
- Languages: Which file types to analyze
- Severity threshold: Minimum severity to comment on
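To make the severity threshold concrete, here is a minimal sketch of how such a setting typically gates which findings become comments. The names (`SEVERITY_ORDER`, `meets_threshold`, the finding dicts) are illustrative, not CloudThinker’s actual API:

```python
# Illustrative only: how a per-repository severity threshold might
# decide which findings get posted as PR comments.
SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def meets_threshold(finding_severity: str, threshold: str) -> bool:
    """True if a finding is at or above the configured minimum severity."""
    return SEVERITY_ORDER.index(finding_severity) >= SEVERITY_ORDER.index(threshold)

findings = [
    {"rule": "hardcoded-secret", "severity": "critical"},
    {"rule": "naming-convention", "severity": "low"},
]
# With the threshold set to "high", only the critical finding is commented on.
to_comment = [f for f in findings if meets_threshold(f["severity"], "high")]
print([f["rule"] for f in to_comment])  # → ['hardcoded-secret']
```

Raising the threshold is how you trade review coverage for a quieter PR thread, as noted in the Tips section.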
Open a Pull Request
Create or open a pull request in one of your connected repositories. CloudThinker will automatically:
- Detect the new PR
- Analyze the changed files
- Post inline review comments on specific lines
- Provide a summary comment with overall findings
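The four steps above can be sketched as a tiny pipeline. Everything here (the data shapes, the toy secret check standing in for real analysis) is hypothetical, not CloudThinker internals:

```python
# Minimal sketch of the review flow: analyze changed files, collect
# inline comments pinned to specific lines, then build a summary.
def review_pr(changed_files):
    comments = []
    for path, lines in changed_files.items():
        for lineno, text in lines:
            # Toy check standing in for real bug/security analysis.
            if "password =" in text:
                comments.append({
                    "path": path,
                    "line": lineno,
                    "body": "Possible hardcoded secret",
                })
    summary = f"{len(comments)} finding(s) across {len(changed_files)} file(s)"
    return comments, summary

comments, summary = review_pr({
    "app/config.py": [(12, 'password = "hunter2"'), (13, "debug = False")],
})
print(summary)  # → 1 finding(s) across 1 file(s)
```

The inline comments carry a file path and line number so they can be attached to the exact diff line, while the summary rolls everything up into one top-level PR comment.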
How It Works
CloudThinker analyzes each changed file and flags issues across four categories:
- Bug detection: Logic errors, null references, race conditions
- Security vulnerabilities: Injection risks, hardcoded secrets, insecure patterns
- Code quality: Naming conventions, complexity, duplication
- Performance: Inefficient queries, unnecessary allocations, N+1 patterns
Track Team Performance with Leaderboard
Once your team has a few reviewed PRs, go to Code Review > Leaderboard to see how everyone is performing. The Leaderboard scores each developer by balancing Quality (AI review scores) and Impact (code complexity) — so it rewards engineers who ship robust code, not just those who ship the most lines.

| Score | Meaning |
|---|---|
| = 1.0 | Exactly at team average |
| > 1.0 | Above average (top performer) |
| < 1.0 | Below team average |
What to Look For
- High Quality + High Impact: Your top performers — ideal mentors and lead reviewers
- High Impact + Low Quality: Possible burnout signal — shipping fast but cutting corners
- High Quality + Low Impact: May be stuck on a hard problem or under-utilized
- Uneven Impact distribution: High “Bus Factor” risk — knowledge concentrated in one person
Leaderboard Scoring Details
Deep dive into the scoring formula, impact calculation, and example calculations
Tips
- Start with a pilot repo: Connect one active repository first to see the review quality before rolling out broadly
- Tune severity thresholds: If reviews are too noisy, increase the minimum severity to High or Critical
- Review the Leaderboard weekly: Track quality trends and workload balance across your team
Next Step
CloudKeepers
Set up autonomous monitoring and compliance scanning