A while back, I wondered how hard it would be to capture the memory of an instance using AWS services and Alexa, then I realized…wait a sec I could probably figure that out myself with some effort.
For this lengthy walkthrough, I used the following components:
- S3 to store our executable and memory dump
- A Windows Server 2016 host + IAM role with access to S3 and SSM
- Systems Manager to check the agent health
- Lambda function + the IAM permissions it needs to run
- Alexa Developer Portal to set up the skill
As usual, the GitHub project repo for this tutorial is here. And…off we go!
P.S. I added a summary at the end of the steps since I know there’s a lot to unpack here. Thank you for the feedback, good sir (you know who you are).
I set up an S3 bucket (save your bucket name for later) with default settings and two folders: one for tools and one to store evidence (the raw memory dump). The tools folder contained a Windows memory capture tool from Google’s Rekall repository called winpmem.
The S3 folder structure:
```
tools/
└── winpmem_1.6.2.exe
evidence/
```
IAM Instance Role
To send SSM commands and upload/download from S3, I created a role using the AmazonEC2RoleforSSM policy. This role will be attached to the Windows host created next.
And since I was already in IAM, I also created the policy needed for the future Lambda function. I added the policy (JSON) to this project’s repo for a quick copy + paste. Here is what the policy (to be attached to the Lambda function later) looks like:
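If the screenshot is hard to read, the shape is roughly this. Note this is my sketch of what such a policy needs (SSM command permissions plus CloudWatch logging); the exact actions and `Resource` scoping may differ, so copy the JSON from the repo for the real thing:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:SendCommand",
        "ssm:GetCommandInvocation"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```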
I decided to go with a 2016 Windows base instance since I knew the Volatility profile existed already.
I did see that AWS has released a 2019 Windows base image, though.
I clicked through all the default settings except:
Tagging: I’m going to call this instance Tim as a shout out to my buddy who helped rekindle my interest in digital forensics. Plus we get to say “Capture Tim” to Alexa.
I opened the Remote Desktop port to My IP. Please set this to your IP and NOT 0.0.0.0/0.
After creation, I attached the IAM role created earlier. I gave the server about 5–10 minutes to boot/set up/connect (you have to wait to log into Windows hosts after the initial launch anyway). Then I logged into the server and installed the AWS CLI.
I just went to this URL using Internet Explorer (be prepared to battle all the security pop-ups), downloaded the MSI, and installed it like any other Windows software.
I opened Powershell and verified that the AWS CLI was installed:
Next, I launched Calculator and minimized the RDP window.
To make this tutorial possible, I needed to ensure the agent was updated and healthy. In the past, I used to log into the instance and reinstall the agent. Now I’ve switched to Systems Manager > Managed Instances to view my current instances. Here it is before the Run Command:
Notice the agent version is blank…
I used Run Command and selected the AmazonInspector-ManageAWSAgent document.
Side note: has anyone had success running the AWS-UpdateSSMAgent document to get the agent healthy and displaying its version? My default has always been to reinstall the agent and move on.
I manually selected the Tim instance.
I disabled any outputs for this — up to you if you want to do the same.
Then I ran the Run command and switched back to check the Agent health.
See how the Agent Version is listed now?
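For anyone scripting this instead of clicking through the console, the Run Command step above boils down to a single boto3 call. Here is a hedged sketch; the instance ID is a placeholder, and only the document name comes from the step above:

```python
# Build the kwargs for the same Run Command the console sends.
# AmazonInspector-ManageAWSAgent is the document selected above;
# the instance ID below is a placeholder, not a real instance.
def build_run_command(instance_id):
    return {
        "InstanceIds": [instance_id],
        "DocumentName": "AmazonInspector-ManageAWSAgent",
        "Comment": "Force the agent to phone home and report its version",
    }

params = build_run_command("i-0123456789abcdef0")
# With credentials configured, you would then run:
#   import boto3
#   boto3.client("ssm").send_command(**params)
```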
I logged into the Alexa Developer Portal and created a custom skill from scratch. You can drag/drop the JSON (check this project’s GitHub repo) and point Alexa to an endpoint (our future Lambda).
You’ll take the Skill ID listed here and plug it into the Lambda function (coming up next!).
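The repo’s JSON defines the full interaction model; stripped down, it looks something like the sketch below. The invocation name matches the “forensics” skill used later, but the intent name here is my guess, so use the repo’s JSON rather than this:

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "forensics",
      "intents": [
        {
          "name": "CaptureIntent",
          "samples": [
            "capture tim"
          ]
        }
      ]
    }
  }
}
```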
I opened a new tab and created a new Lambda function from scratch. To create the Lambda package for upload, I did the following steps:
```
virtualenv --python=/usr/local/bin/python3.7 .
source bin/activate
pip install boto3 ask-sdk
# copy + paste + save code from github repo found here:
# zip the dependencies, then add the handler file at the zip root
cd lib/python3.7/site-packages
zip -r9 ~/lambda_package.zip .
cd -
zip -g ~/lambda_package.zip lambda_function.py   # match your handler's file name
```
Of course, I suggest reading the code and understanding what it’s doing. Heck, make improvements?
After the zip was created, I uploaded it. Don’t forget to ensure the handler matches the python file name.
If you don’t have the existing role lambda_basic_execution, you can create it by clicking Custom Role.
To give the function more time to process the request, I increased the timeout from 3 seconds to 1 minute. Adjust as necessary. To specify the bucket for uploading, I set the environment variable here.
You’ll see the Python code reference this environment variable at runtime:

```
# line 24
bucket = os.environ['Bucket']
```
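The repo has the real Lambda code, but the core idea is simple: build the PowerShell lines that SSM’s Run Command executes on Tim. Here is a hedged sketch of that step; the local paths and S3 key names are my assumptions, while the `Bucket` variable matches the snippet above:

```python
import os

# Sketch of the Lambda's core step (see the repo for the real code):
# build the PowerShell commands that an SSM Run Command document such as
# AWS-RunPowerShellScript would execute on the instance.
# Local paths and S3 key names below are assumptions for illustration.
def build_capture_commands():
    bucket = os.environ["Bucket"]  # the environment variable set above
    return [
        f"aws s3 cp s3://{bucket}/tools/winpmem_1.6.2.exe C:\\winpmem_1.6.2.exe",
        "C:\\winpmem_1.6.2.exe C:\\memdump.raw",
        f"aws s3 cp C:\\memdump.raw s3://{bucket}/evidence/memdump.raw",
    ]
```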
Next, I added an Alexa Skills Kit trigger. I copy/pasted the Alexa Skill ID from earlier here, and pasted the ARN (Amazon Resource Name) of the Lambda function into the default endpoint location in the Alexa Developer Portal. You’ll then save and build your model.
If this is slightly confusing and you’d like more screenshot examples, I have a few more detailed walkthroughs in my previous Alexa articles.
Bringing all that hard work together
After everything was set up, I jumped over to the Alexa console to test. I opened the forensics skill and asked Alexa to ‘capture Tim’.
Switching back to the Windows instance, I watched the files download and the executable run.
After the memory dump (1 GB file) was created, it was uploaded to S3 within 2 minutes.
I downloaded the file to my MacBook to test with Volatility, an open-source memory forensics framework. I ran imageinfo to get a profile suggestion before analyzing.
Using one of the profiles, I was able to grep out the calculator app launched earlier (and even the one I launched last night since I left the server on).
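That “grep” step is just a case-insensitive match over Volatility’s pslist output. A minimal sketch in Python, with fabricated sample lines to show the column layout (not output from my actual dump):

```python
# Case-insensitive grep over Volatility pslist output, the equivalent of
# `volatility ... pslist | grep -i calc`. The sample lines are fabricated.
def grep_processes(pslist_output, needle):
    return [line for line in pslist_output.splitlines()
            if needle.lower() in line.lower()]

sample = """Offset(V)          Name                    PID   PPID
0xffff800000000000 explorer.exe           1234    900
0xffff800000000100 Calculator.exe         4321   1234"""

hits = grep_processes(sample, "calc")
```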
Fair warning: I’m not an expert, but forensics has always been intriguing to me. I hope someone finds this proof of concept useful in some way.
TL;DR the whole thing, summarize?
Since there’s a lot going on in this post, I’m taking a suggestion and doing a little summarizing. Here’s a quick rundown of the steps:
- Set up an S3 bucket containing the executable and a folder to store evidence (the raw memory dump)
- Created two roles: one attached to our EC2 instance so it can be managed by Systems Manager and upload/download files from S3, and one that allows our Lambda function to execute SSM commands (which perform the memory capture and upload to S3)
- Set up a Windows host, named it Tim, attached the role mentioned in the last bullet, logged in to install the AWS CLI, and launched the Calculator app
- Switched over to Systems Manager to force the agent to phone home and update. Notice how we didn’t know the agent version until after the Run Command was pushed?
- Created a custom Lambda function, set the timeout and environment variable, uploaded the code, and configured the Alexa Skills Kit trigger
- For the Alexa skill, imported the JSON from this project’s GitHub and pasted in the ARN of the Lambda function. Saved and built the model
- After that, I tested the skill and was able to download a raw memory dump from S3. Using Volatility, I could see the Calculator app
Hope this article has been helpful and thank you for taking the time to read this lengthy post.