Connect your Apache Kafka clusters to enable Alex (Cloud Engineer) and Tony (Database Engineer) to monitor topic health, analyze consumer lag, and optimize streaming performance.
Select your Kafka platform for specific connection instructions:
Self-hosted Kafka
Confluent Cloud
For local development or secure internal networks using unauthenticated Kafka (e.g., Confluent Local via Docker), no server-side configuration or user creation is required.
1
Ensure Network Access
Ensure the CloudThinker application can reach your Kafka broker at <broker-name>.<your-domain>:9092.
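Before adding the connection, you can sanity-check reachability with a plain TCP probe. This is an illustrative sketch, not part of CloudThinker itself; substitute your own broker hostname for the placeholder, and note that a TCP connect only proves network reachability, not that the Kafka handshake will succeed:

```python
import socket

def broker_reachable(host: str, port: int = 9092, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the broker can be opened.

    A plain TCP check only verifies network reachability; it does not
    confirm that a Kafka protocol handshake will succeed.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or DNS failure
        return False

# Example with the placeholder host from this step:
# broker_reachable("<broker-name>.<your-domain>", 9092)
```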
2
Add Connection in CloudThinker
Navigate to Connections → Kafka in the CloudThinker dashboard and enter your endpoints. No credentials are required for unauthenticated clusters.
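For reference, an unauthenticated cluster needs only a bootstrap address and the PLAINTEXT security protocol. A minimal sketch in the configuration-dict style used by the confluent-kafka Python client (the broker address is a made-up placeholder):

```python
def plaintext_config(bootstrap_servers: str) -> dict:
    # Minimal client configuration for an unauthenticated (PLAINTEXT)
    # cluster, in the dict style used by the confluent-kafka client.
    return {
        "bootstrap.servers": bootstrap_servers,
        "security.protocol": "PLAINTEXT",  # no credentials required
    }

config = plaintext_config("broker.internal.example:9092")
```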
Schema Registry is only required if you are actively using it to manage data schemas.
1
Open Confluent Cloud and pick your environment
Go to confluent.cloud/home, then open Environments. Click the environment you want to connect. The environment ID appears in the URL after you select it (for example, env-xxxxx).
2
Open your cluster and collect endpoints
Inside the selected environment, open Clusters and click your target cluster (for example, <cluster-name>). Collect:
BOOTSTRAP_SERVERS
KAFKA_REST_ENDPOINT
KAFKA_CLUSTER_ID
Keep KAFKA_ENV_ID as the selected environment ID from Step 1.
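One convenient way to carry these four values into later steps is to export them as environment variables named after the fields above and load them in one place. A small sketch of that pattern (the variable names mirror the fields listed in this step; the example values in the usage note are placeholders):

```python
import os

# The four values collected in Steps 1-2, read from environment variables
# whose names mirror the field names above.
REQUIRED = ["BOOTSTRAP_SERVERS", "KAFKA_REST_ENDPOINT", "KAFKA_CLUSTER_ID", "KAFKA_ENV_ID"]

def load_cluster_fields(env=os.environ) -> dict:
    """Return the collected Confluent Cloud fields, failing fast if any is unset."""
    missing = [k for k in REQUIRED if not env.get(k)]
    if missing:
        raise KeyError(f"missing Confluent Cloud fields: {missing}")
    return {k: env[k] for k in REQUIRED}
```

Failing fast here surfaces a forgotten field before you paste half-complete values into the connection form.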
3
Create scoped API keys and secrets
Go to confluent.cloud/settings/api-keys and click + Add API Key. Choose Service Account for production workloads, or My Account for development/testing. Select the desired scope in the Confluent onboarding flow, then save the generated API key and API secret pair. Scopes you may create keys for:
Kafka cluster
Schema Registry
ksqlDB cluster
Flink region
Cloud resource management
Tableflow
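A Kafka-cluster-scoped key pair is used as SASL/PLAIN credentials over TLS: the API key becomes the SASL username and the secret becomes the password. A sketch in the confluent-kafka configuration-dict style (treat it as illustrative, and keep the secret out of source control):

```python
def confluent_cloud_config(bootstrap_servers: str, api_key: str, api_secret: str) -> dict:
    # Confluent Cloud Kafka clusters authenticate clients with SASL/PLAIN
    # over TLS; the API key is the username, the API secret the password.
    return {
        "bootstrap.servers": bootstrap_servers,
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": api_key,
        "sasl.password": api_secret,
    }
```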
4
Get Schema Registry endpoint (optional)
In the selected environment, open Stream Governance → Schema Registry. Collect the Schema Registry endpoint.
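If you later call the Schema Registry REST API directly, it accepts HTTP basic auth built from the Schema Registry-scoped key pair created in the API-keys step. A small sketch of constructing that header (no network call; the trailing comment names a real Schema Registry endpoint path):

```python
import base64

def schema_registry_headers(sr_api_key: str, sr_api_secret: str) -> dict:
    # Confluent Cloud Schema Registry uses HTTP basic auth with a
    # Schema Registry-scoped API key and secret.
    token = base64.b64encode(f"{sr_api_key}:{sr_api_secret}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# e.g. GET <schema-registry-endpoint>/subjects with these headers lists subjects
```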
5
Get Flink compute pool details (optional)
In the selected environment, open Flink. Open Compute pools and create a pool with + Add compute pool if needed. Click the target compute pool and collect:
FLINK_COMPUTE_POOL_ID
FLINK_ENV_ID (same environment ID from URL)
URL pattern example:
https://confluent.cloud/environments/<env-id>/flink/pools/<compute-pool-id>/overview
Set FLINK_REST_ENDPOINT from your cloud provider and region (AWS, Azure, or GCP; for example <region-code>).
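The environment and compute pool IDs can be read straight out of that URL pattern, or the URL rebuilt from the IDs. A trivial helper mirroring the pattern above (the IDs in the test are made-up examples):

```python
def flink_pool_overview_url(env_id: str, pool_id: str) -> str:
    # Mirrors the compute pool URL pattern shown above.
    return f"https://confluent.cloud/environments/{env_id}/flink/pools/{pool_id}/overview"
```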
6
Add Connection in CloudThinker
In CloudThinker, navigate to Connections → Kafka. Paste the fields for the scopes you enabled. You can leave optional scope fields empty and add them later.
Confluent Cloud uses scope-based API credentials. Each API key and secret pair grants access to a specific resource scope. You can start with Kafka-only fields, then add Schema Registry, Flink, Cloud API, or Tableflow fields later.
Back in the CloudThinker App, navigate to Connections → Kafka, add your bootstrap servers and authentication details, and finish. For Confluent Cloud, you can leave optional scope fields empty and add them later.
@alex check consumer lag for the orders-service group
@alex identify under-replicated partitions
@tony analyze message throughput trends for the events topic
@tony check data retention policies across all topics
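Consumer lag, which the first @alex command inspects, is the per-partition gap between the broker's log end offset and the group's committed offset. A minimal sketch of the arithmetic (the topic name and offset values are made up for illustration):

```python
def consumer_lag(end_offsets: dict, committed: dict) -> dict:
    # Lag per partition = log end offset - committed offset.
    # A partition with no committed offset is treated as fully lagging.
    return {
        tp: end - committed.get(tp, 0)
        for tp, end in end_offsets.items()
    }

lag = consumer_lag(
    {("orders", 0): 1200, ("orders", 1): 980},   # broker log end offsets
    {("orders", 0): 1150, ("orders", 1): 980},   # group committed offsets
)
# ("orders", 0) lags by 50 messages; ("orders", 1) is caught up
```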
Verify the Kafka cluster is running and accepting connections.
Check that the broker port (default 9092) is open and not blocked by a firewall.
Verify bootstrap servers are correct and reachable from CloudThinker.
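A common cause of "unreachable bootstrap servers" is a malformed server list. A small sketch that splits the comma-separated list into host/port pairs so each entry can be checked individually (the default port of 9092 matches the note above; hostnames in the test are placeholders):

```python
def parse_bootstrap_servers(value: str) -> list:
    # Split a comma-separated bootstrap list into (host, port) pairs,
    # defaulting to 9092 when no port is given.
    servers = []
    for entry in value.split(","):
        entry = entry.strip()
        if not entry:
            continue
        host, _, port = entry.partition(":")
        servers.append((host, int(port) if port else 9092))
    return servers
```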
Confluent Cloud partial scope fields
When using partial scope onboarding, remove the entire key-value pair for unused scopes. Do not leave empty strings. Correct (Kafka-only, Schema Registry removed entirely):
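As an illustration only (these field names are assumptions, not CloudThinker's documented schema), a Kafka-only submission keeps just the Kafka-scoped pairs and omits the Schema Registry keys entirely rather than setting them to empty strings:

```python
# Illustrative only: field names are assumptions, not CloudThinker's schema.
kafka_only = {
    "bootstrap_servers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",
    "kafka_api_key": "<kafka-api-key>",
    "kafka_api_secret": "<kafka-api-secret>",
    # No schema_registry_* keys at all: the pairs are removed, not set to "".
}

def drop_empty_fields(fields: dict) -> dict:
    # Defensive helper: strip any empty-string values before submitting.
    return {k: v for k, v in fields.items() if v != ""}
```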
For production, prefer Service Account keys over personal keys.
CloudThinker supports partial scope onboarding. If you only provide Kafka scope fields first, you can still create the connection and add Schema Registry, Flink, Cloud API, or Tableflow credentials later.