Nettitude’s IR team recently had an opportunity to investigate a breach in a cloud environment. The client had recently adopted Office 365 in a hybrid configuration to host a range of Microsoft services for users, including email and SharePoint. They had seen very heavy traffic on their web application and traced the activity back to an admin user. This user had requested a password change for the web application; the new credentials were sent to the user’s corporate email account, so the assumption was that the user’s corporate email account had been compromised. Several other user accounts had also requested password resets in the web application around the same time as the suspect administrator account. We were asked to determine whether the corporate accounts had been compromised and, if so, how.
Office 365
This was a good opportunity to investigate Office 365 installations. Some interesting discoveries were made, which will be shared in this post.
We discovered that Multi-Factor Authentication (MFA) was not enabled for the cloud environment; MFA is not enabled by default when Office 365 is deployed. In addition, it is not possible to configure the lock-out policy for failed logon attempts beyond the default of 10 failures.
We quickly developed a hypothesis that the impacted accounts had been brute forced. The client informed us that they had already eliminated this possibility from an analysis of the log files; there were no extensive incidents of failed logons in the time leading up to the suspected compromise. We therefore requested access to the audit logs in Office 365 in order to validate their findings. The audit log interface can be found in the Security & Compliance Center component of the web interface.
Anyone who has had to do a live analysis of Office 365 will know that it can be a frustrating experience. The investigator is presented with a limited web interface and must configure their search criteria in that interface. Results are presented in batches of 150 logs; to view the next 150 results the investigator must pull down a slider in the web interface and wait, often for minutes, before the results are presented. You then repeat this process in order to view the next batch of 150 logs, and so on.
You will find that analysis is much quicker if you use the “export” feature to export all of the filtered audit logs to a spreadsheet. However, this presents the investigator with a new set of challenges. Firstly, you should understand that auditing, when enabled, will log a vast array of user operations, many of which will not be relevant to the investigation. Secondly, the exported logs are not very user-friendly. Each record consists of four fields:
- CreationDate
- UserID
- Operations
- AuditData
A vast array of key-value pairs, varying by operation, is concatenated into the single field named AuditData. An example of a single record might look something like this (much of the data has been edited to obscure traceable indicators):
2017-02-22T16:30:18.0000000Z ahaw@netitude.com FileModified {"CreationTime":"2017-05-16T16:30:18","Id":"0314a542-2b6c-4b3a-833b-08d45b4016d2","Operation":"FileModified","OrganizationId":"0314a542-2b6c-4b3a-833b-08d45b4016d2","RecordType":6,"UserKey":"i:0h.f|membership|10000000aa1a1a@live.com","UserType":0,"Version":1,"Workload":"OneDrive","ClientIP":"255.255.255.255","ObjectId":"https:\/\/internal-my.sharepoint.com\/personal\/ashaw_netti_com\/Documents\/IR Docs\/Shared - Team Reports\/this_blog_post.docx","UserId":"ashaw@netitude.com","EventSource":"SharePoint","ItemType":"File","ListId":"0a1ca110-b999-1111-aaa1-aa11b5dc50d0","ListItemUniqueId":"a11aa9a0-aaa9-1a1a-b111-faef11ce11a9","Site":"aa1aaae9-0891-4966-b3da-d0a36c24f8d3","UserAgent":"Microsoft Office Word 2013","WebId":"999999f9-a11e-1110-aa09-b111ad1dc9d9","SourceFileExtension":"docx","SiteUrl":"https:\/\/internal-my.sharepoint.com\/personal\/ashaw_netti_com\/","SourceFileName":"this_blog_post.docx","SourceRelativeUrl":"personal\/ashaw_netti_com\/Documents\/IR Docs\/Shared - Team Reports\/this_blog_post.docx"}
The structure is not static across all records; the contents of the AuditData field change depending on which user operation has been performed. There is therefore a varying number of key-value pairs present, which makes writing a parser challenging. Fortunately, Microsoft have published both the detailed audit properties and the Office 365 Management Activity API schema, which we can use to understand the data in the audit logs.
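As a starting point, the export can be flattened with a short script. The sketch below is illustrative rather than a complete parser: it assumes the four column names listed above and simply merges whatever keys each record’s AuditData blob happens to contain into the top-level row.

```python
import csv
import json

def parse_audit_export(path):
    """Flatten an exported Office 365 audit CSV. Each row's AuditData
    field is a JSON blob whose keys vary by operation, so we merge
    whatever keys are present into the top-level record."""
    records = []
    with open(path, newline="", encoding="utf-8-sig") as f:
        for row in csv.DictReader(f):
            flat = {k: v for k, v in row.items() if k != "AuditData"}
            try:
                flat.update(json.loads(row.get("AuditData") or "{}"))
            except json.JSONDecodeError:
                pass  # malformed blob: keep only the outer CSV fields
            records.append(flat)
    return records
```

Because the varying keys are merged rather than mapped to fixed positions, the same function handles FileModified, logon and SharePoint records alike.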
Log Analysis
In the absence of an existing parser, we had a requirement to quickly process these logs so that the data could be analysed and presented in an understandable format. Realising that the data was structured, albeit with variable numbers of key-value pairs, we turned to our Swiss army knife for structured data: Microsoft Log Parser and its Log Parser Studio front end.
For those not familiar with this tool, it is produced by Microsoft and allows a user to execute SQL-like queries against structured data to extract fields of interest from that data. We have previously published some LogParser queries to process sysmon event logs.
We wrote some quick and dirty queries to process exported Office 365 audit data. They are by no means comprehensive, but they should be sufficient to get you started if you need to analyse Office 365 audit log data. We are therefore publishing them for the wider community in our GitHub repository.
Below is an example of the LogParser query that we wrote to extract Failed Logon operations from the audit logs:
SELECT RowNumber, UserIds, Operations,
    EXTRACT_TOKEN(AuditData,3,'"') AS CreateTime,
    EXTRACT_TOKEN(AuditData,7,'"') AS EventID,
    EXTRACT_TOKEN(AuditData,15,'"') AS OrganisationID,
    REPLACE_STR(SUBSTR(EXTRACT_TOKEN(AuditData,18,'"'),1,2),',','') AS RecordType,
    EXTRACT_TOKEN(AuditData,21,'"') AS ResultStatus,
    EXTRACT_TOKEN(AuditData,25,'"') AS UserKey,
    SUBSTR(EXTRACT_TOKEN(AuditData,28,'"'),1,1) AS UserType,
    SUBSTR(EXTRACT_TOKEN(AuditData,30,'"'),1,1) AS Version,
    EXTRACT_TOKEN(AuditData,33,'"') AS WorkLoad,
    EXTRACT_TOKEN(AuditData,37,'"') AS ClientIP,
    EXTRACT_TOKEN(AuditData,41,'"') AS ObjectID,
    EXTRACT_TOKEN(AuditData,45,'"') AS UserID,
    SUBSTR(EXTRACT_TOKEN(AuditData,48,'"'),1,1) AS ActiveDirectoryEventType,
    EXTRACT_TOKEN(EXTRACT_TOKEN(AuditData,1,'['),0,']') AS ExtendedProperties,
    EXTRACT_TOKEN(AuditData,61,'"') AS Client,
    EXTRACT_TOKEN(AuditData,67,'"') AS UserDomain
FROM 'C:\Audit.csv'
WHERE Operations LIKE 'PasswordLogon%'
  AND AuditData LIKE '%failed%'
ORDER BY CreateTime ASC
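The positional EXTRACT_TOKEN approach is brittle, because the number and order of keys in AuditData vary by record type. An alternative sketch, assuming the same export column names and keying on JSON field names instead of token positions (the `failed` substring match mirrors the query’s `LIKE '%failed%'` clause):

```python
import csv
import json

def failed_logons(path):
    """Equivalent of the LogParser query above, but parsing AuditData
    as JSON so it tolerates varying key order and key count."""
    hits = []
    with open(path, newline="", encoding="utf-8-sig") as f:
        for row in csv.DictReader(f):
            if not row.get("Operations", "").startswith("PasswordLogon"):
                continue
            audit = json.loads(row.get("AuditData") or "{}")
            # Mirror AuditData LIKE '%failed%': match the whole blob.
            if "failed" in json.dumps(audit).lower():
                hits.append({
                    "CreateTime": audit.get("CreationTime", ""),
                    "UserId": audit.get("UserId"),
                    "ClientIP": audit.get("ClientIP"),
                })
    return sorted(hits, key=lambda h: h["CreateTime"])
```

Keying on names rather than positions also means a single function survives schema additions, at the cost of needing the JSON parsed per row.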
Analysis Results
Our initial analysis of the audit data matched the client’s findings; there was very little indication of failed logons to the impacted accounts prior to the suspected breach. However, our initial analysis was “vertical”; that is to say that it focused on a single user account suspected of being compromised. We know from the daily investigations that we perform for our clients through our SOC managed service that you don’t get the full picture unless you do both a vertical AND a horizontal analysis. A horizontal analysis is one that encompasses all user accounts across a particular time-frame – normally one that includes the suspected time of a compromise.
We therefore re-oriented our investigation to perform a horizontal analysis. We exported all of the Office 365 audit data for all operations on all user accounts across a 30 minute time frame in the early hours of the morning of the suspected breach, when you would expect very little user activity. Our first finding was that there was a significant volume of activity in the logs, encompassing every single user account in the client’s estate. Once we applied our LogParser queries to the log data, it immediately became clear how the attack had occurred and succeeded. The data now showed the unmistakable fingerprint of a password spraying attack.
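The difference between the two views can be sketched as two simple aggregations over the flattened failed-logon records. The field names follow the example record shown earlier, and the per-minute bucketing is an illustrative choice, not a prescribed one:

```python
from collections import Counter

def vertical_view(records, user):
    """Failed logons for a single account: the view in which the
    attack looked like nothing (a couple of failures per hour)."""
    return sum(1 for r in records if r.get("UserId") == user)

def horizontal_view(records):
    """Failed logons per minute across ALL accounts: the view that
    exposes a steady drumbeat of guesses spread over many users."""
    return Counter(r["CreationTime"][:16]  # truncate to YYYY-MM-DDTHH:MM
                   for r in records if r.get("CreationTime"))
```

A vertical count of 1–2 per account alongside a horizontal count of dozens per minute is exactly the asymmetry described below.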
Password Spraying
Password spraying is a variation on traditional brute force attacks. A traditional brute force is directed against a single user account; a dictionary of potential passwords is attempted in sequence until the correct one is found or the dictionary is exhausted, at which point the attacker selects a new account name and launches the dictionary attack against that account. However, a log entry may be recorded for each failed attempt, so any organisation monitoring its logs for failed logons can detect this attack. In addition, the standard defence against such attacks is to configure a lock-out threshold on each user account, so that no further authentication attempts are permitted after a pre-configured number of failures within a specified time frame.
Password spraying is a technique used by attackers to circumvent the previously described controls. Instead of attacking a single user account, the technique involves attacking a large number of accounts but with a potentially smaller dictionary. So if an attacker has a list of 300 user accounts and a dictionary of 2 passwords, the sequence of the attack would be:
- UserAcct1: Password1
- UserAcct2: Password1
- UserAcct3: Password1
- <snip>
- UserAcct300: Password1
- UserAcct1: Password2
- UserAcct2: Password2
- UserAcct3: Password2
- <etc>
If the attacker is smart, they will throttle the attack in order to avoid any lock-out thresholds. By the time the second password is attempted against any particular account, that account will (the attacker hopes) have moved outside of the lock-out threshold time frame. That is exactly what happened in our investigation: the attacker had a list of over 1,000 user accounts and throttled the attack so that, although they were trying one username/password combination per second, any particular user account was only subjected to around 2 password guesses per hour. This illustrates the value of conducting both horizontal and vertical analysis.
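This shape suggests a simple detection heuristic: flag time windows in which many distinct accounts each receive only a few failed logons, the inverse of a classic brute force. The sketch below uses hourly buckets and illustrative thresholds of our own choosing, not vendor guidance:

```python
from collections import defaultdict, Counter

def flag_spray_hours(events, min_accounts=50, max_per_account=3):
    """events: iterable of (iso_timestamp, account) pairs for FAILED
    logons. Flag hours in which many distinct accounts each saw only
    a few failures -- the spraying fingerprint. Thresholds are
    illustrative assumptions and would need tuning per estate."""
    per_hour = defaultdict(Counter)
    for ts, account in events:
        per_hour[ts[:13]][account] += 1  # bucket by YYYY-MM-DDTHH
    return sorted(hour for hour, accts in per_hour.items()
                  if len(accts) >= min_accounts
                  and max(accts.values()) <= max_per_account)
```

The `max_per_account` cap is what distinguishes a spray from a noisy brute force: a single account with hundreds of failures will not trip this rule, but a throttled one-guess-per-second spray across 1,000 accounts will.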
Analysis Conclusions
Our analysis concluded that 12 accounts had been successfully compromised during the password spraying attack. The indications were that the attacker only needed a dictionary of 100 potential passwords (or fewer) to compromise those 12 accounts. The average number of password guesses across the compromised accounts was around 60 before compromise, while the fewest guesses required to compromise an account was 16. It was determined that the attack had been ongoing for over 24 hours before any component of it was detected.
It was determined that the client was using a password policy requiring a minimum of 8 characters, which is the default password policy for Office 365.
Investigation Curiosities
It was noted, during the investigation of the Office 365 logs, that the logs were inconsistent in terms of recording successful logons. We found that analysis of the logs from the client’s Azure AD installation gave a much higher-fidelity view of successful logons.
There were also anomalies in the time stamps of some of the operations recorded in the Office 365 audit logs. We determined that users were spread over a number of different time zones and concluded that they had failed to configure the correct time zone when they first logged in to their Office 365 accounts. This can have a significant negative impact on the accuracy of the audit logs in Office 365 – we therefore advise all investigators to be cognisant of this issue.
The attacker appeared to have a very accurate and comprehensive list of valid user accounts relevant to the target organisation. We concluded that this was obtained during a previous compromise of a user account within the organisation, wherein the attacker downloaded the Global Address Book from the compromised account.
Summary
To summarise, the takeaways from this investigation:
- Ensure MFA is enabled on your O365 installation.
- Educate your users about password security.
- Watch your logs; consider implementing a SIEM solution.
- Export the logs from both O365 and Azure AD during an investigation.
- Conduct a horizontal and vertical analysis of user logs for most accurate results.
- Ensure that all users configure their correct time zone when they first log in to Office 365.