Final Post — Concept: Alexa Golden Images with Packer on AWS, Azure, and GCP

rav3n
8 min read · Mar 8, 2021

Well friends…this is likely the last post I’ll write on this account. It’s been fun, but I’m going to be moving on to crypto-related stuff (smart contracts, NFTs, Blender3D, etc.) and leave the cloud security & automation posts here on this account. So I’ll still be posting on Medium, just different content on another account that I haven’t created yet.

I had another post that I was working on, but I’m going to skip it and instead post what I was trying to achieve so I can get it off my mind. Also, I’m going to skip posting all the Python and Terraform since it’s unfinished and the whole thing got rather complex (you’ll see). And I’ll be honest… I lost interest in completing it.

So what was I trying to do? I was trying to use Alexa and Packer to create golden images across AWS, GCP, and Azure, with AWS Inspector validation. To start, I had an Alexa skill called Project Midas, with a Lambda function endpoint, that went like this:

User: “Alexa, Open Project Midas”

Alexa: “What would you like to do?”

User: “Build Image”

Alexa: “What OS?”

User: “Debian” (kept it to just Debian and Ubuntu)

Alexa: “For which cloud?”

User: “Microsoft”

Note: Alexa could understand the word Azure and the acronym AWS, but struggled with ‘GCP’ half the time, so I opted to use the company names (Amazon, Microsoft, and Google) for the user utterances.

Alexa: “Would you like an Inspector scan?”

Note: The point of this part was that if you wanted to build an Azure image, you could also get an Inspector scan. This would build the same image on AWS, which could then be validated with an Inspector scan, since Azure and GCP don’t offer a similar service.

The skill would then take the following values and start execution of a state machine (AWS Step Functions), passing them to the first Lambda (a sketch of that hand-off follows the list). Values:

  • Provider: Cloud provider told to Alexa
  • Decision: Yes/No to Inspector scan question
  • Operating System: OS told to Alexa
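
As a rough sketch of that hand-off, the skill’s Lambda could kick off the state machine with boto3, something like this (the state machine ARN and payload keys are placeholders, not necessarily what I used):

import json
import boto3

sfn = boto3.client('stepfunctions')

def start_build(provider, decision, operating_system):
    """Start the image-build state machine with the values Alexa collected."""
    return sfn.start_execution(
        stateMachineArn='arn:aws:states:us-east-1:111111111111:stateMachine:ProjectMidas',  # placeholder ARN
        input=json.dumps({
            'Provider': provider,                 # e.g. "Microsoft"
            'Decision': decision,                 # Yes/No to the Inspector question
            'OperatingSystem': operating_system   # e.g. "Debian"
        })
    )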

The state machine would start with a Request Parser Lambda that would do the following:

  • Download the latest preconfigured Packer build config (JSON) for the chosen cloud provider from an S3 bucket.

If you aren’t familiar with Packer, it’s an open source tool for creating identical machine images for multiple platforms from a single source configuration. You just need a build config (JSON), a server running Packer or a service that runs Packer for you (like the SSM AWS-RunPacker automation), and of course the right permissions to build images. Packer will spin up a server, build your image, run scripts, export it to the image registry, then automatically terminate the build server. Packer is moving away from JSON (toward HCL2) though, read more here.

Here are a few of my sample files to build on the different clouds:

AWS

{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": "",
    "packer_vpc_id": "",
    "packer_subnet_id": "",
    "packer_security_group": "",
    "ami_name": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-0947d2ba12ee1ff75",
    "instance_type": "m4.large",
    "ssh_username": "ec2-user",
    "ami_name": "{{user `ami_name`}}",
    "ssh_timeout": "5m",
    "iam_instance_profile": "SSMAutomationPackerCF",
    "vpc_id": "{{user `packer_vpc_id`}}",
    "subnet_id": "{{user `packer_subnet_id`}}",
    "security_group_id": "{{user `packer_security_group`}}",
    "associate_public_ip_address": true
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["sudo yum update -y"]
  }]
}

Azure

{
  "variables": {
    "client_id": "",
    "client_secret": "",
    "tenant_id": "",
    "subscription_id": "",
    "location": "",
    "managed_image_name": ""
  },
  "builders": [{
    "type": "azure-arm",
    "client_id": "{{ user `client_id` }}",
    "client_secret": "{{ user `client_secret` }}",
    "tenant_id": "{{ user `tenant_id` }}",
    "subscription_id": "{{ user `subscription_id` }}",
    "managed_image_resource_group_name": "packer-azure-resource-group",
    "managed_image_name": "{{ user `managed_image_name` }}",
    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "18.04-LTS",
    "image_version": "latest",
    "location": "{{ user `location` }}",
    "vm_size": "Standard_B2s"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["sudo apt-get update -y", "sudo apt-get upgrade -y"]
  }]
}

GCP

{
  "variables": {
    "account_file": "",
    "project_id": "",
    "zone": "",
    "image_name": ""
  },
  "builders": [{
    "type": "googlecompute",
    "project_id": "{{user `project_id`}}",
    "source_image": "debian-9-stretch-v20200805",
    "zone": "{{user `zone`}}",
    "ssh_username": "packer",
    "account_file": "{{user `account_file`}}",
    "image_name": "{{user `image_name`}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["sudo apt-get update -y", "sudo apt-get upgrade -y"]
  }]
}

NOTE: It’s important to call out here that several values in the build files are updated by the Request Parser lambda using environment variables set by Terraform.

For example, on AWS: for Packer to spin up a temp server, it needs to know the VPC ID in the build file. So when the Terraform for this project is applied, it creates a Packer VPC and sets an environment variable with the key packer_vpc_id and the VPC ID as its value. When Request Parser is triggered, it updates the build file with that VPC ID to ensure Packer is building in the right VPC. So in the builds, anything in the variables block was set either by Terraform or by Request Parser. I hope that makes sense. Like I said, this got a little complex…
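
A minimal sketch of that download/patch/re-upload step, assuming a hypothetical bucket environment variable and file naming (not my exact code):

import json
import os
import boto3

s3 = boto3.client('s3')
BUCKET = os.environ['PACKER_CONFIG_BUCKET']  # hypothetical env var set by Terraform

def patch_build_config(provider):
    """Download the provider's build config, fill in the Terraform-provided values, re-upload it."""
    key = f'{provider.lower()}-build.json'  # illustrative naming, e.g. amazon-build.json
    config = json.loads(s3.get_object(Bucket=BUCKET, Key=key)['Body'].read())
    if provider == 'Amazon':
        # These env vars are created by the Terraform that builds the Packer VPC.
        config['variables']['packer_vpc_id'] = os.environ['packer_vpc_id']
        config['variables']['packer_subnet_id'] = os.environ['packer_subnet_id']
        config['variables']['packer_security_group'] = os.environ['packer_security_group']
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(config, indent=2))
    return key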

Request Parser then gets the latest base image for the operating system told to Alexa (Debian for this example) from the chosen provider. This means I needed the credentials already set up per cloud provider, and the Lambda needed to be able to pull each secret stored in AWS Secrets Manager. For AWS, this was a secret key and access key. For Azure, this was a client ID, client secret, tenant ID, and subscription ID. For GCP, this was a JSON dump of a service account key. All of them had permission to create an instance (for Packer) to build the images.
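
Pulling those credentials from the Lambda was just a Secrets Manager call, roughly like this (the secret names are made-up placeholders):

import json
import boto3

secrets = boto3.client('secretsmanager')

def get_cloud_credentials(provider):
    """Fetch the pre-created credentials for the chosen cloud provider."""
    secret_name = {
        'Amazon': 'midas/aws-packer-keys',       # access key + secret key
        'Microsoft': 'midas/azure-sp',           # client ID/secret, tenant ID, subscription ID
        'Google': 'midas/gcp-service-account',   # service account key JSON dump
    }[provider]
    return json.loads(secrets.get_secret_value(SecretId=secret_name)['SecretString'])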

For getting the latest AMI on AWS, you describe all the AMIs, filter by owner and name, then take the latest in the list. Azure requires you to know the image publisher (credativ), image offer (Debian), and image SKU (9). GCP requires you to list all images and filter by image family (debian-9) in the ‘debian-cloud’ project.
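
For the AWS side, that lookup is just describe_images plus a sort on creation date; here’s a sketch (the owner ID and name filter are assumptions for newer Debian releases, so verify them for the release you actually want):

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

def latest_debian_ami():
    """Return the most recent Debian AMI published by Debian's AWS account."""
    images = ec2.describe_images(
        Owners=['136693071363'],  # Debian's publisher account (assumption; differs for older releases)
        Filters=[{'Name': 'name', 'Values': ['debian-10-amd64-*']}]
    )['Images']
    return sorted(images, key=lambda i: i['CreationDate'], reverse=True)[0]['ImageId']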

Additionally, Request Parser would also update the JSON with the right SSH user, specifically on AWS. While testing this I noticed that if I didn’t set the user for each operating system, Packer would hang because it couldn’t log into the instance with the default user. If you were building from a Debian image, Packer needs to know to use the admin user, versus ubuntu on the Ubuntu side. Interesting caveat.
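
The fix was basically a small OS-to-SSH-user mapping that Request Parser wrote into the ssh_username field, along these lines:

# Default SSH users Packer should log in with, per OS choice.
SSH_USERS = {
    'Debian': 'admin',    # default user on the official Debian AMIs
    'Ubuntu': 'ubuntu',   # default user on the official Ubuntu AMIs
}

def ssh_user_for(operating_system):
    return SSH_USERS[operating_system]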

After the function had gathered all this info, updated the build file, and re-uploaded it, the state machine would move on to the next function (AWS Image Creator, Azure Image Creator, or GCP Image Creator) depending on whether you wanted an Inspector scan (the Decision). If you wanted an Inspector scan, the AWS Image Creator Lambda would trigger and begin building, along with whichever other cloud provider you asked for. If not, only the chosen provider’s Image Creator would trigger and begin building using its updated build file.

The Azure and GCP Image Creator Lambdas would start, create the images, and that was the end of that branch. Of course, I wanted to do image cleanup and so on, but I didn’t get around to that logic.

Is this getting complicated yet?

AWS Image Creator took advantage of the SSM AWS-RunPacker automation document, which is kind of what started this whole idea of using Alexa to create images. Here is a sample of how you use this service:

import boto3

ssm_client = boto3.client('ssm', region_name=region)  # region comes from the Lambda's config
response = ssm_client.start_automation_execution(
    DocumentName='AWS-RunPacker',
    Parameters={
        "TemplateFileName": [template_file],      # name of the updated build file
        "TemplateS3BucketName": [bucket_name],    # bucket it was uploaded to
        "Mode": ["Build"]
    }
)

Pretty simple to use: you just need the build file’s name and the bucket where it’s uploaded.

AWS Image Creator would then wait until the execution was done, checking every 15 seconds, as shown below.

import time

execution_id = response["AutomationExecutionId"]
status = "InProgress"
while status in ["Pending", "InProgress"]:
    updated = ssm_client.get_automation_execution(AutomationExecutionId=execution_id)
    status = updated['AutomationExecution']['AutomationExecutionStatus']
    print(status)
    time.sleep(15)

When the execution was complete, a new AMI would exist and the state machine would move on to the next step: scanning it with Inspector or ending, based on the Decision.

Since you can’t just scan an AMI directly, you need to launch a host from the newly created AMI, wait for it to be running, then install the Inspector agent on it. That was the first part of the Inspector Scan Lambda. The second part was creating a resource group for the Assessment Target, creating an assessment template, gathering the rules packages, then starting the assessment. Since I was just scanning an isolated VM, I only pulled three of the four rules packages (Common Vulnerabilities and Exposures, CIS Operating System Security Configuration Benchmarks, and Security Best Practices), skipping the Networking rules.
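
Here’s a rough sketch of that second part using boto3’s Inspector (Classic) client, assuming the freshly launched instance was tagged so the resource group can find it (the tag key and names are made up):

import boto3

inspector = boto3.client('inspector', region_name='us-east-1')

def start_scan(image_name):
    """Create a target and template around the tagged instance, then kick off the assessment."""
    rg = inspector.create_resource_group(
        resourceGroupTags=[{'key': 'MidasScan', 'value': image_name}]  # hypothetical tag on the instance
    )
    target = inspector.create_assessment_target(
        assessmentTargetName=f'midas-{image_name}',
        resourceGroupArn=rg['resourceGroupArn']
    )
    # Keep every rules package except the networking one.
    package_arns = inspector.list_rules_packages()['rulesPackageArns']
    packages = inspector.describe_rules_packages(rulesPackageArns=package_arns)['rulesPackages']
    rules = [p['arn'] for p in packages if 'Network' not in p['name']]
    template = inspector.create_assessment_template(
        assessmentTargetArn=target['assessmentTargetArn'],
        assessmentTemplateName=f'midas-template-{image_name}',
        durationInSeconds=3600,  # the roughly 60 minute run mentioned below
        rulesPackageArns=rules
    )
    run = inspector.start_assessment_run(
        assessmentTemplateArn=template['assessmentTemplateArn'],
        assessmentRunName=f'midas-run-{image_name}'
    )
    return run['assessmentRunArn']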

When the assessment started, the Inspector Scan Lambda returned the assessmentRunArn. This was useful for the next and final Lambda, Inspector Results.

Since you can specify a duration for an Inspector scan, my plan was to have the state machine wait at least 60 minutes before continuing to the next step, i.e. triggering Inspector Results. That function pulls the results, filtering on the HIGH severity findings:

import boto3

inspector_client = boto3.client('inspector')
findings_response = inspector_client.list_findings(
    assessmentRunArns=[event['AWS']],  # event['AWS'] carries the assessmentRunArn from the previous step
    filter={
        'severities': ['High']
    }
)

With the findings, I had a Python function to describe each finding:

findings_list = findings_response['findingArns']  # the finding ARNs returned by list_findings
findings_details = inspector_client.describe_findings(findingArns=findings_list, locale='EN_US')

And another Python function in Inspector Results to parse findings_details:

import re

def parse_findings(findings_data):
    """Parse findings into something useful."""
    findings_parsed = []
    for details in findings_data['findings']:
        if details is not None:
            details['recommendation'] = re.sub(r'\s+', ' ', details['recommendation']).strip()
            details['title'] = re.sub(r'\s+', ' ', details['title']).strip()
            details['description'] = re.sub(r'\s+', ' ', details['description']).strip().replace('Description ', 'Description: ')
            finding = {
                "rule_id": details['id'].strip(),
                "recommendation": details['recommendation'],
                "severity": details['severity'].strip(),
                "title": details['title'],
                "description": details['description']
            }
            findings_parsed.append(finding)
    return findings_parsed

My plan was to push the final list, in a nice format, to a Slack channel via webhook.
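
That piece never got built, but it would have been a simple POST to an incoming webhook, something like this (the webhook URL is obviously a placeholder):

import json
import urllib.request

SLACK_WEBHOOK_URL = 'https://hooks.slack.com/services/XXX/YYY/ZZZ'  # placeholder

def post_findings_to_slack(findings_parsed):
    """Post each parsed HIGH finding to the Slack channel behind the webhook."""
    lines = [f"*{f['title']}* ({f['severity']})\n{f['recommendation']}" for f in findings_parsed]
    body = json.dumps({'text': '\n\n'.join(lines)}).encode('utf-8')
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=body,
        headers={'Content-Type': 'application/json'}
    )
    urllib.request.urlopen(req)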

So I had the image creator Lambdas, the request parser, and the Alexa Lambdas done, but I still needed to finish the Step Functions part to tie it all together. It started to get extra complicated because I was Terraforming every part of the project, including all the IAM permissions/roles, VPCs, the secrets needed to build images on the other clouds, and all the Lambdas that included third-party libraries. But I’ve been wanting to study more Solidity, Haskell, and Blender, and this article has been on my mind forever, so it just left me stuck doing neither.

I started doing these articles because I was at a job that wasn’t challenging me, so these were my ‘tech’ fix. That isn’t the case anymore, so I have less desire to write them and am looking forward to learning other things. These 22 (now 23) posts have done a tremendous job of helping me learn cloud security and the automation that is possible. I don’t go a week in my current job without interacting with at least 3 different clouds, and up to 5 (AWS, GCP, Azure, Alibaba, and Tencent). I just wanted to end by saying that I really appreciate y’all sticking with me through this journey to level up my skills.

With that, thanks so much for taking the time out of your day to check out my content. I do really appreciate it. I’ll be around somewhere on the internet…Hope all is well on your side of the screen, cheers!
