Geon has recently concluded development of a prototype capability for running REDHAWK components on popular container orchestration platforms such as Docker Swarm and Kubernetes. This work has been submitted as pull request #17 against the REDHAWK 2.2.8 release. Earlier community efforts enabled container-based deployment in REDHAWK by simply having the GPP device run containers via the docker command line; in contrast, this work adds a plugin architecture directly to the REDHAWK DomainManager to support native interaction with Docker, Docker Swarm, Kubernetes, and custom cloud-management APIs. As a result, components can be deployed without a GPP device or an associated DeviceManager. In the near future this work will be extended to launch entire REDHAWK waveforms in a container with no modifications. For now, you will have to use the tools and resources available at geontech/docker-redhawk-components to wrap your components in containers for deployment using the new Docker and Kubernetes DomainManager plugins.
Getting started with Kubernetes and REDHAWK can be challenging, so we’ve put together a three-part series to help interested parties get up and running with this new container-aware version of REDHAWK. To avoid the peculiarities of on-premises clusters, we will demonstrate a complete setup using AWS EKS.
Creation of a VPC and EKS cluster within Amazon Web Services (AWS)
VPC Script
#!/bin/bash
echo "This script will build out a VPC and associated resources to house your EKS cluster"
if ! command -v jq &> /dev/null
then
    echo "jq is not installed or on your \$PATH. Please ensure the jq program is available to this script before continuing."
    exit 1
else
    echo "jq found at $(command -v jq). Proceeding."
fi
if ! command -v aws &> /dev/null
then
    echo "aws is not installed or on your \$PATH. Please ensure the aws program is available to this script before continuing."
    exit 1
else
    echo "aws found at $(command -v aws). Proceeding."
fi
# Set initial environment variables
echo "Setting environment variables"
export NAME="redhawk"
export VPC_NAME="$NAME-vpc"
export CLUSTER_NAME="$NAME-cluster"
export SG_NAME="$NAME-resolver-sg"
export CIDR_BLOCK="10.20.0.0/16"
export public_subnet_cidr="10.20.0.0/24"
export private_subnet1_cidr="10.20.1.0/24"
export private_subnet2_cidr="10.20.2.0/24"
export private_subnet3_cidr="10.20.3.0/24"
export RESOLVER_IP1=10.20.1.10
export RESOLVER_IP2=10.20.2.10
export public_subnet_az="us-gov-west-1a"
export private_subnet1_az="us-gov-west-1a"
export private_subnet2_az="us-gov-west-1b"
export private_subnet3_az="us-gov-west-1c"
# Create VPC
echo "Creating VPC"
aws_response=$(aws ec2 create-vpc --cidr-block $CIDR_BLOCK --output json)
vpc_id=$(echo -e "$aws_response" | jq '.Vpc.VpcId' | tr -d '"')
aws ec2 create-tags --resources "$vpc_id" --tags "Key=Name,Value=$VPC_NAME"
aws ec2 modify-vpc-attribute --vpc-id $vpc_id --enable-dns-support "{\"Value\":true}"
aws ec2 modify-vpc-attribute --vpc-id $vpc_id --enable-dns-hostnames "{\"Value\":true}"
# Create Subnets
echo "Creating subnets"
sub_response=$(aws ec2 create-subnet --vpc-id $vpc_id --cidr-block $public_subnet_cidr --availability-zone $public_subnet_az --output json)
public_subnet_id=$(echo -e "$sub_response" | jq '.Subnet.SubnetId' | tr -d '"')
aws ec2 modify-subnet-attribute --subnet-id $public_subnet_id --map-public-ip-on-launch
aws ec2 create-tags --resources $public_subnet_id --tags "Key=Name,Value=$VPC_NAME-public"
sub_response=$(aws ec2 create-subnet --vpc-id $vpc_id --cidr-block $private_subnet1_cidr --availability-zone $private_subnet1_az --output json)
private_subnet1_id=$(echo -e "$sub_response" | jq '.Subnet.SubnetId' | tr -d '"')
aws ec2 create-tags --resources $private_subnet1_id --tags "Key=Name,Value=$VPC_NAME-private1"
sub_response=$(aws ec2 create-subnet --vpc-id $vpc_id --cidr-block $private_subnet2_cidr --availability-zone $private_subnet2_az --output json)
private_subnet2_id=$(echo -e "$sub_response" | jq '.Subnet.SubnetId' | tr -d '"')
aws ec2 create-tags --resources $private_subnet2_id --tags "Key=Name,Value=$VPC_NAME-private2"
sub_response=$(aws ec2 create-subnet --vpc-id $vpc_id --cidr-block $private_subnet3_cidr --availability-zone $private_subnet3_az --output json)
private_subnet3_id=$(echo -e "$sub_response" | jq '.Subnet.SubnetId' | tr -d '"')
aws ec2 create-tags --resources $private_subnet3_id --tags "Key=Name,Value=$VPC_NAME-private3"
# Tag Subnets
echo "Tagging subnets"
aws ec2 create-tags --resources $private_subnet1_id $private_subnet2_id $private_subnet3_id --tags "Key=kubernetes.io/cluster/$CLUSTER_NAME,Value=shared" "Key=kubernetes.io/role/internal-elb,Value=1"
# Create Gateways
echo "Creating gateways"
igw_response=$(aws ec2 create-internet-gateway --output json)
igw_id=$(echo -e "$igw_response" | jq '.InternetGateway.InternetGatewayId' | tr -d '"')
attach_response=$(aws ec2 attach-internet-gateway --internet-gateway-id $igw_id --vpc-id $vpc_id --output json)
eip_response=$(aws ec2 allocate-address --domain vpc --output json)
eip_id=$(echo -e "$eip_response" | jq '.AllocationId' | tr -d '"')
ngw_response=$(aws ec2 create-nat-gateway --allocation-id $eip_id --subnet-id $public_subnet_id --output json)
ngw_id=$(echo -e "$ngw_response" | jq '.NatGateway.NatGatewayId' | tr -d '"')
echo "Waiting for the NAT gateway to become available (this can take several minutes)..."
aws ec2 wait nat-gateway-available --nat-gateway-ids $ngw_id
# Route Tables
echo "Creating and configuring Route Tables"
rt_response=$(aws ec2 create-route-table --vpc-id $vpc_id --output json)
route_table_id=$(echo -e "$rt_response" | jq '.RouteTable.RouteTableId' | tr -d '"')
echo "Sleeping 5 seconds to give private route table time to initialize..."
sleep 5
aws ec2 create-tags --resources $route_table_id --tags "Key=Name,Value=$VPC_NAME-private-rt"
aws ec2 associate-route-table --route-table-id $route_table_id --subnet-id $private_subnet1_id > /dev/null
aws ec2 associate-route-table --route-table-id $route_table_id --subnet-id $private_subnet2_id > /dev/null
aws ec2 associate-route-table --route-table-id $route_table_id --subnet-id $private_subnet3_id > /dev/null
aws ec2 create-route --route-table-id $route_table_id --destination-cidr-block 0.0.0.0/0 --nat-gateway-id $ngw_id > /dev/null
main_rt_response=$(aws ec2 describe-route-tables --filters "[{\"Name\": \"association.main\", \"Values\": [\"true\"]},{\"Name\": \"vpc-id\", \"Values\": [\"$vpc_id\"]}]")
main_rt_id=$(echo -e "$main_rt_response" | jq '.RouteTables[0].RouteTableId' | tr -d '"')
aws ec2 create-route --route-table-id $main_rt_id --destination-cidr-block 0.0.0.0/0 --gateway-id $igw_id > /dev/null
# Route53 Endpoint Resolvers
echo "Creating Route53 Endpoint Resolvers"
security_response=$(aws ec2 create-security-group \
--group-name "$SG_NAME" \
--description "Private: $SG_NAME" \
--vpc-id "$vpc_id" --output json)
sg_id=$(echo -e "$security_response" | jq '.GroupId' | tr -d '"')
aws ec2 create-tags \
--resources "$sg_id" \
--tags Key=Name,Value="$SG_NAME"
security_response2=$(aws ec2 authorize-security-group-ingress \
--group-id "$sg_id" \
--protocol udp --port 53 \
--cidr "0.0.0.0/0")
aws route53resolver create-resolver-endpoint --name "$NAME-resolver" --security-group-ids $sg_id --direction INBOUND \
--ip-addresses SubnetId=$private_subnet1_id,Ip=$RESOLVER_IP1 SubnetId=$private_subnet2_id,Ip=$RESOLVER_IP2 \
--creator-request-id "$(date +%Y%m%d-%H%M%S)" > /dev/null
echo "Done! VPC $VPC_NAME created!"
echo "Use the VPC ID $vpc_id and the private subnet IDs $private_subnet1_id $private_subnet2_id $private_subnet3_id as arguments to eksctl to create your EKS cluster in this VPC."
echo "Please ensure your VPC has the necessary network connectivity to accept the proper traffic."
EKS Cluster Config
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: redhawk-cluster
  region: us-gov-west-1
vpc:
  publicAccessCIDRs: ["<Your_Public_IP_Address>"]
  clusterEndpoints:
    publicAccess: true
    privateAccess: true
  subnets:
    private:
      us-gov-west-1a:
        id: subnet-<Subnet_1_ID>
      us-gov-west-1b:
        id: subnet-<Subnet_2_ID>
      us-gov-west-1c:
        id: subnet-<Subnet_3_ID>
managedNodeGroups:
  - name: eks-nodes
    instanceType: t3.2xlarge
    desiredCapacity: 3
    minSize: 1
    maxSize: 4
    privateNetworking: true
    ssh:
      allow: true
      publicKeyName: <Your-SSH-Key-In-AWS>
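Once the VPC script has printed your subnet IDs, substitute them (and your SSH key name and public IP) into the config above and hand the file to eksctl. A minimal invocation sketch, assuming the config is saved as redhawk-cluster.yaml (a filename of our choosing) and that eksctl and kubectl are installed with AWS credentials configured:

```shell
# Create the EKS cluster from the config file; provisioning typically
# takes on the order of 15-20 minutes.
eksctl create cluster -f redhawk-cluster.yaml

# Confirm the managed node group's instances registered with the cluster.
kubectl get nodes -o wide
```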
Configure AWS resources and networking to facilitate REDHAWK workloads
Configure REDHAWK to use the EKS cluster and launch Waveforms and Components
If you are interested in more detail about the cluster-management plugins that we’ve added to the DomainManager, or in how to write one for your own cluster-management API, check out our pull request on GitHub and see the core-framework/docs/container-orchestration directory. Otherwise, check back soon for enhancements that deploy native REDHAWK waveforms via Kubernetes without any modifications at all: no “wrapping in containers” needed.