Confluent Cloud Secure Public Access Proxy
Estimated time to complete: 20-30 minutes.
Overview
The Zilla Plus for Confluent Cloud Secure Public Access proxy lets authorized Kafka clients connect to, publish messages to, and subscribe to topics in your Confluent Cloud cluster via the internet.
In this guide we will deploy the Zilla Plus for Confluent Cloud Secure Public Access proxy and showcase globally trusted public internet connectivity to a Confluent Cloud cluster from a Kafka client, using the custom wildcard domain `*.example.aklivity.io`.
AWS services used
| Service | Required | Usage | Quota |
| --- | --- | --- | --- |
| Secrets Manager | Yes | Startup only | Not reached |
| Certificate Manager | No (private key and certificate can be inlined in Secrets Manager instead) | Startup only | Not reached |
| Private Certificate Manager | No (private key and certificate can be inlined in Secrets Manager instead) | Startup only | Not reached |
Default AWS Service Quotas are recommended.
Prerequisites
Before setting up internet access to your Confluent Cloud cluster, you will need the following:

- a Confluent Cloud cluster configured for SASL/SCRAM authentication
- a subscription to Zilla Plus for Confluent Cloud via AWS Marketplace
- a VPC security group for the Zilla proxies
- an IAM security role for the Zilla proxies
- permission to modify global DNS records for a custom domain
Tips
Check out the Troubleshooting guide if you run into any issues.
Create the Confluent Cloud Cluster in AWS with PrivateLink
This creates your Confluent Cloud cluster with AWS PrivateLink in preparation for secure access via the internet.
A Confluent Cloud cluster deployed in AWS is needed for secure remote access via the internet. The Confluent Cloud Quickstart will walk you through creating one. You can skip this step if you have already created a Confluent Cloud cluster with an equivalent configuration. We will use the below resource names to reference the AWS resources needed in this guide.
- Cluster Name: `my-cc-cluster`
- Cluster Type: Enterprise
Your Confluent Cloud Enterprise cluster will need a network connection. You will need to create a new PrivateLink Attachment in the network management tab, then start configuring a new network connection to get a `PrivateLink Service Id`.
- Name: `zilla_plus_secure_public_access`
- Add Connection
  - Name: `zilla_plus_privatelink_service`
- Save the `PrivateLink Service Id`

Confluent Cloud Enterprise needs an AWS PrivateLink connection. For this, we will create a VPC plus other VPC resources with the below resource names.
- Name tag auto-generation: `my-cce-privatelink`
- VPC endpoints: none
- Create the VPC
Now you will need to set up AWS PrivateLink with the below resource names.
- Endpoint Name: `my-cce-privatelink-vpce`
- Service Name: the `PrivateLink Service Id` value you saved earlier
- VPC: `my-cce-privatelink-vpc`
- Subnets: select public subnets for each availability zone
- Create the Endpoint
Finish the `zilla_plus_privatelink_service` connection wizard with the `PrivateLink Endpoint ID` of your `my-cce-privatelink-vpce` endpoint, found in the Endpoints table.
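The same interface endpoint can also be created from the command line. A hedged sketch, assuming the AWS CLI v2 is configured; the VPC ID, subnet IDs, and service name below are placeholders for your `my-cce-privatelink-vpc` resources and the `PrivateLink Service Id` you saved:

```shell
# Sketch only: every ID below is a placeholder for the values in your account.
# --service-name carries the PrivateLink Service Id saved from the Confluent wizard.
aws ec2 create-vpc-endpoint \
  --region us-east-1 \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0 \
  --subnet-ids subnet-0aaaa1111bbbb2222 subnet-0cccc3333dddd4444
```

The command returns the endpoint description, including the `VpcEndpointId` needed to finish the connection wizard.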
Create the Route53 Hosted zone
This creates a Route53 hosted zone so that a generic DNS record can point to the Confluent Cloud AWS PrivateLink endpoint used by the Zilla proxy.
Follow the Create Hosted Zone wizard with the following parameters and defaults.
- Domain name: `<Region>.aws.private.confluent.cloud`
- Type: Private
- Region: `<CC Cluster Region>`
- VPC: `my-cce-privatelink-vpc`
- Create the hosted zone
You will need to add a wildcard A record pointing to the DNS name of your `my-cce-privatelink-vpce` VPC endpoint, completing the `zilla_plus_privatelink_service` connection.
- Record Name: `*`
- Record Type: A
- Alias: True
- Route Traffic: Alias to VPC Endpoint
- Region: `<CC Cluster Region>`
- Select the `*.vpce-svc-*` DNS address
- Routing policy: Simple Routing
- Evaluate Target Health: Yes
- Create the record
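The same alias record can be created from the command line. A hedged sketch, assuming the AWS CLI v2; the hosted zone ID, the VPC endpoint's hosted zone ID, and the endpoint DNS name are all placeholders for the values in your account:

```shell
# Sketch only: substitute your private hosted zone ID, the region-specific
# hosted zone ID of the VPC endpoint, and the *.vpce-svc-* DNS name you selected.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "*.us-east-1.aws.private.confluent.cloud",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z0EXAMPLEVPCE",
          "DNSName": "*.vpce-svc-0123456789abcdef0.us-east-1.vpce.amazonaws.com",
          "EvaluateTargetHealth": true
        }
      }
    }]
  }'
```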
Create the Zilla proxy security group
This creates your Zilla proxy security group to allow Kafka clients and SSH access.
A VPC security group is needed for the Zilla proxies when they are launched.
Follow the Create Security Group wizard with the following parameters and defaults.
Check your selected region
Make sure you have selected the desired region, ex: US East (N. Virginia) `us-east-1`.
- Name: `my-zilla-proxy-sg`
- VPC: `my-cce-privatelink-vpc`
- Description: Kafka clients and SSH access
- Add Inbound Rule
  - Type: CUSTOM TCP
  - Port Range: 9092
  - Source type: Anywhere-IPv4
- Add Inbound Rule
  - Type: SSH
  - Source type: My IP
- Add Outbound Rule (if not exists)
  - Type: All traffic
  - Destination: Anywhere-IPv4
- Create the Security Group
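The two inbound rules above can also be added from the command line. A hedged sketch, assuming the AWS CLI v2 is configured; the security group ID and the `203.0.113.7/32` client CIDR are placeholders for your `my-zilla-proxy-sg` ID and your own public IP:

```shell
# Sketch only: sg-... and 203.0.113.7/32 are placeholders.
# Opens Kafka's 9092 to the internet and SSH to your IP, matching the wizard steps.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --ip-permissions \
    'IpProtocol=tcp,FromPort=9092,ToPort=9092,IpRanges=[{CidrIp=0.0.0.0/0,Description=Kafka-clients}]' \
    'IpProtocol=tcp,FromPort=22,ToPort=22,IpRanges=[{CidrIp=203.0.113.7/32,Description=SSH-access}]'
```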
Check your network settings
Your IP may be different when you SSH into the EC2 instance. VPNs and other networking infrastructure may cause the `My IP` inbound rule to fail. Instead, you can use one of the other ways AWS provides to execute commands in an EC2 instance.
Navigate to the VPC Management Console Security Groups table. Select the `my-zilla-proxy-sg` security group you just created. You will create an inbound rule to allow all traffic from within the security group itself.
- Add Inbound Rule
  - Type: All Traffic
  - Source type: Custom
  - Source: `my-zilla-proxy-sg`
Add the `my-zilla-proxy-sg` security group to your VPC Endpoint by finding your `my-cce-privatelink-vpce` in the Endpoints table.

- Select your VPC endpoint > `Actions` menu > select `Manage Security Groups`
- Select both security groups: `default` and `my-zilla-proxy-sg`
- Save the changes
Create the Zilla proxy IAM security role
This creates an IAM security role to enable the required AWS services for the Zilla proxies.
Follow the Create IAM Role guide to create an IAM security role with the following parameters:

- IAM role Name: `aklivity-zilla-proxy`
- IAM role Managed Policies:
  - `AWSCertificateManagerReadOnly`
  - `AmazonSSMManagedInstanceCore`
- IAM role Inline Policies:
  - `CCProxySecretsManagerRead`
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": [
        "arn:aws:secretsmanager:*:*:secret:wildcard.example.aklivity.io-*"
      ]
    }
  ]
}
```
If you used a different secret name for your certificate key, replace `wildcard.example.aklivity.io` in the resource pattern of the `CCProxySecretsManagerRead` policy.
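To confirm the policy's resource pattern will match your actual secret, you can look up the secret's full ARN. A hedged sketch, assuming the AWS CLI v2 and the default secret name used in this guide:

```shell
# Sketch only: prints the full secret ARN, including the random suffix AWS
# appends, which must match the arn:...:secret:wildcard.example.aklivity.io-*
# resource pattern in the inline policy above.
aws secretsmanager describe-secret \
  --secret-id wildcard.example.aklivity.io \
  --query 'ARN' \
  --output text
```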
Subscribe via AWS Marketplace
The Zilla Plus for Confluent Cloud is available through the AWS Marketplace. You can skip this step if you have already subscribed to Zilla Plus for Confluent Cloud via AWS Marketplace.
To get started, visit the Proxy's Marketplace Product Page and `Subscribe` to the offering. You should now see `Zilla Plus for Confluent Cloud` listed in your AWS Marketplace subscriptions.
Create the Server Certificate
We need a TLS Server Certificate for your custom DNS wildcard domain that can be trusted by a Kafka Client from anywhere.
Follow the Create Server Certificate (LetsEncrypt) guide to create a new TLS Server Certificate. Use your own custom wildcard DNS domain in place of the example wildcard domain `*.example.aklivity.io`.
Info
Note the server certificate secret ARN as we will need to reference it from the Secure Public Access CloudFormation template. Make sure you have selected the desired region, ex: US East (N. Virginia) `us-east-1`.
Deploy the Zilla Plus Secure Public Access Proxy
This initiates deployment of the Zilla Plus for Confluent Cloud stack via CloudFormation.
Navigate to your AWS Marketplace subscriptions and select `Zilla Plus for Confluent Cloud` to show the manage subscription page.
- From the `Agreement` section > `Actions` menu > select `Launch CloudFormation stack`
- Select the `CloudFormation Template` > `Secure Public Access` fulfillment option
- Make sure you have the desired region selected, such as `us-east-1`
- Click `Continue to Launch`
- Choose the action `Launch CloudFormation`
- Click `Launch` to complete the `Create stack` wizard with the following details:
Step 1. Create Stack
- Prepare template: Template is ready
- Specify template: Amazon S3 URL
- Amazon S3 URL: (auto-filled)
Step 2. Specify stack details

- Stack name: `my-zilla-proxy`

Parameters:
- Network Configuration
  - VPC: `my-cce-privatelink-vpc`
  - Subnets: `my-cce-privatelink-subnet-public-1a`, `my-cce-privatelink-subnet-public-1b`
- Confluent Cloud Configuration
  - Bootstrap server: `<Cluster ID>.<Region>.aws.private.confluent.cloud:9092` *1
- Zilla Plus Configuration
  - Instance count: 2
  - Instance type: `t3.small` *2
  - Role: `aklivity-zilla-proxy`
  - Security Groups: `my-zilla-proxy-sg`
  - Secrets Manager Secret ARN: `<TLS certificate private key secret ARN>` *3
  - Public Wildcard DNS: `*.example.aklivity.io` *4
  - Public Port: 9092
  - Key pair for SSH access: `my-key-pair` *5
Configuration Reference:

1. Follow the steps in the Test Connectivity to Confluent Cloud docs to get your cluster's Bootstrap server URL.
2. Consider the network throughput characteristics of the AWS instance type, as that will impact the upper bound on network performance.
3. This is the ARN of the created secret for the signed certificate's private key that was returned in the last step of the Create Server Certificate (LetsEncrypt) guide. Make sure you have selected the desired region, ex: US East (N. Virginia) `us-east-1`.
4. Replace with your own custom wildcard DNS pattern.
5. Follow the Create Key Pair guide to create a new key pair to access EC2 instances via SSH.
Step 3. Configure stack options: (use defaults)
Step 4. Review
Confirm the stack details are correct and `Submit` to start the CloudFormation deploy.
Info
When your Zilla proxy is ready, the CloudFormation console will show `CREATE_COMPLETE` for the newly created stack.
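The stack status can also be polled from the command line instead of the console. A hedged sketch, assuming the AWS CLI v2 and the `my-zilla-proxy` stack name used in this guide:

```shell
# Sketch only: prints CREATE_IN_PROGRESS while deploying and CREATE_COMPLETE
# once the Zilla proxies are ready.
aws cloudformation describe-stacks \
  --stack-name my-zilla-proxy \
  --query 'Stacks[0].StackStatus' \
  --output text
```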
Verify Zilla proxy Service
This checks that the services and networking were properly configured.
Navigate to the EC2 running instances dashboard.
Check your selected region
Make sure you have selected the desired region, ex: US East (N. Virginia) `us-east-1`.
Select either of the Zilla proxies launched by the CloudFormation template to show the details.
Info
They each have an IAM Role name starting with `aklivity-zilla-proxy`.
Find the `Public IPv4 Address` and then SSH into the instance.

```shell
ssh -i ~/.ssh/<key-pair.cer> ec2-user@<instance-public-ip-address>
```
After logging in via SSH, check the status of the `zilla-plus` system service. Verify that the `zilla-plus` service is active and logging output similar to that shown below.

```shell
systemctl status zilla-plus.service
```

```
zilla-plus.service - Zilla Plus
  Loaded: loaded (/etc/systemd/system/zilla-plus.service; enabled; vendor preset: disabled)
  Active: active (running) since...
```
Check for the active ports with `netstat`.

```shell
netstat -ntlp
```

```
tcp6  0  0 :::9092  :::*  LISTEN  1726/.zpm/image/bin
```
You can get a stdout dump of the `zilla-plus.service` using `journalctl`.

```shell
journalctl -e -u zilla-plus.service | tee -a /tmp/zilla.log
```

```
systemd[1]: Started zilla-plus.service - Zilla Plus.
...
```
All output from cloud-init is captured by default to `/var/log/cloud-init-output.log`. There shouldn't be any errors in this log.

```shell
cat /var/log/cloud-init-output.log
```

```
Cloud-init v. 22.2.2 running 'init'...
```
Check the networking of the Zilla proxy instances to Confluent Cloud.
Verify that the instance can resolve the private Route53 DNS address.
```shell
nslookup <Cluster ID>.<Region>.aws.private.confluent.cloud
```

```
Server: ***
Address: ***

Non-authoritative answer:
Name: <Cluster ID>.<Region>.aws.private.confluent.cloud
Address: ***
```
Check the communication over the necessary ports with `netcat`.

```shell
nc -vz <Cluster ID>.<Region>.aws.private.confluent.cloud 9092
```

```
Connection to <Cluster ID>.<Region>.aws.private.confluent.cloud port 9092 [tcp/italk] succeeded!
```
Repeat these steps for each of the other Zilla proxies launched by the CloudFormation template if necessary.
Configure Global DNS
This ensures that any new Kafka brokers added to the cluster can still be reached via the Zilla proxy.
When using a wildcard DNS name for your own domain, such as `*.example.aklivity.io`, the DNS entries are set up in your DNS provider.
Navigate to the CloudFormation console, then select the `my-zilla-proxy` stack to show the details. In the stack `Outputs` tab, find the public DNS name of the `NetworkLoadBalancer`. You need to create a `CNAME` record mapping your public DNS wildcard pattern to the public DNS name of the Network Load Balancer.
Info
You might prefer to use an Elastic IP address for each NLB public subnet, providing DNS targets for your `CNAME` record that can remain stable even after restarting the stack.

For testing purposes, you can edit your local `/etc/hosts` file instead of updating your DNS provider.
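For that local-testing path, one approach on Linux is to resolve the NLB's DNS name once and append a hosts entry; a sketch where the NLB DNS name is a placeholder for the value from your stack `Outputs` tab:

```shell
# Local-testing sketch only: NLB_DNS is a placeholder for your
# NetworkLoadBalancer DNS name from the CloudFormation Outputs tab.
NLB_DNS=my-zilla-proxy-nlb-0123456789abcdef.elb.us-east-1.amazonaws.com
# Resolve one of the NLB's public IPs (getent is available on most Linux distros)
NLB_IP=$(getent hosts "$NLB_DNS" | awk '{print $1; exit}')
# Map the wildcard bootstrap name used by confluent.properties to that IP
echo "$NLB_IP kafka.example.aklivity.io" | sudo tee -a /etc/hosts
```

Remember to remove the entry once your real DNS records are in place.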
Verify Kafka Client Connectivity
To verify that we have successfully enabled public internet connectivity to our Kafka cluster from the local development environment, we will use a generic Kafka client to create a topic, publish messages and then subscribe to receive these messages from our Kafka cluster via the public internet.
Install the Kafka Client
First, we must install a Java runtime that can be used by the Kafka client.
```shell
sudo yum install java-1.8.0
```

Now we are ready to install the Kafka client:

```shell
wget https://archive.apache.org/dist/kafka/2.8.0/kafka_2.13-2.8.0.tgz
tar -xzf kafka_2.13-2.8.0.tgz
cd kafka_2.13-2.8.0
```
Tips
We use a generic Kafka client here, however the setup for any Kafka client, including KaDeck, Conduktor, and akhq.io will be largely similar. With the Zilla proxy you can use these GUI Kafka clients to configure and monitor your Kafka applications, clusters and streams.
Configure the Kafka Client
With the Kafka client now installed, we are ready to configure it and point it at the Zilla proxy.
The Zilla proxy relies on TLS-encrypted SASL, so we need to create a file called `confluent.properties` that tells the Kafka client to use `SASL_SSL` as the security protocol with the `PLAIN` SASL mechanism.
Notice the username and password placeholders; you will need to replace those with your own API key credentials. Follow the Use API Keys to Control Access in Confluent Cloud guide to associate your cluster's API key and secret with the `SASL_SSL` username and password.
```properties
# Required connection configs for Kafka producer, consumer, and admin
bootstrap.servers=kafka.example.aklivity.io:9092
#bootstrap.servers=lkc-0d9ox2.us-east-1.aws.private.confluent.cloud:9092
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='<cluster-api-key>' password='<cluster-api-secret>';
sasl.mechanism=PLAIN

# Required for correctness in Apache Kafka clients prior to 2.6
client.dns.lookup=use_all_dns_ips
```
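If you prefer to keep credentials out of the properties file itself, the `sasl.jaas.config` line can be generated from environment variables. A minimal sketch; the variable names are our own, and the key and secret shown are placeholders:

```shell
# Placeholders standing in for your Confluent Cloud API key and secret
CLUSTER_API_KEY='ABC123XYZ' CLUSTER_API_SECRET='s3cr3t'
# Emit the JAAS line exactly as the Kafka client expects it
printf "sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='%s' password='%s';\n" \
  "$CLUSTER_API_KEY" "$CLUSTER_API_SECRET"
```

Redirecting the output with `>> confluent.properties` appends the line to the client configuration without the secret ever living in your shell history as part of the file edit.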
Tips
As the TLS certificate is signed by a globally trusted certificate authority, there is no need to configure your Kafka client to override the trusted certificate authorities.
Test the Kafka Client
This verifies internet connectivity to your Confluent Cloud cluster via Zilla Plus for Confluent Cloud.
We can now verify that the Kafka client can successfully communicate with your Confluent Cloud cluster via the internet from your local development environment to create a topic, then publish and subscribe to the same topic.
Warning
Replace these TLS bootstrap server names accordingly for your own custom wildcard DNS pattern.
Create a Topic
Use the Kafka client to create a topic called `zilla-plus-test`, replacing `<tls-bootstrap-server-names>` in the command below with the TLS proxy names of your Zilla proxy:
```shell
bin/kafka-topics.sh \
  --create \
  --topic zilla-plus-test \
  --partitions 3 \
  --replication-factor 3 \
  --command-config confluent.properties \
  --bootstrap-server <tls-bootstrap-server-names>
```
A quick summary of what just happened
- The Kafka client with access to the public internet issued a request to create a new topic
- This request was directed to the internet-facing Network Load Balancer
- The Network Load Balancer forwarded the request to the Zilla proxy
- The Zilla proxy routed the request to the appropriate Confluent Cloud broker
- The topic was created in the Confluent Cloud broker
- Public access was verified
Publish messages
Publish two messages to the newly created topic via the following producer command:
```shell
bin/kafka-console-producer.sh \
  --topic zilla-plus-test \
  --producer.config confluent.properties \
  --broker-list <tls-bootstrap-server-names>
```

A prompt will appear for you to type in the messages:

```
>This is my first event
>This is my second event
```
Receive messages
Read these messages back via the following consumer command:
```shell
bin/kafka-console-consumer.sh \
  --topic zilla-plus-test \
  --from-beginning \
  --consumer.config confluent.properties \
  --bootstrap-server <tls-bootstrap-server-names>
```

You should see the `This is my first event` and `This is my second event` messages.

```
This is my first event
This is my second event
```
Monitor the Zilla proxy
Follow the Monitoring the Zilla proxy instructions
Upgrade the Zilla proxy
Follow the Upgrading the Zilla proxy instructions