I was scrolling through social media the other night and came across a friend who posted a screenshot of one of his home lab devices getting ransomwared. I reached out and asked if he wanted help taking a look into what happened and he excitedly said yes! The next 7-8 hours were a blur. I was up until around 3 a.m. that night checking things out (it might have been a work night ¯\_(ツ)_/¯ ).
Come to find out, his environment was encrypted at 2:23 a.m. EST, about 3 days prior. The attacker appears to have been active for 14 minutes, dropping tools such as Mimikatz and LaZagne, and then ultimately launching Dever ransomware, which included SMB scanning, persistence mechanisms, and lateral movement.
This blog post will cover the network architecture of the environment, the live incident response, an interesting Prefetch find, a timeline of the attack, info on Dever ransomware, a summary, and IOCs.
Network Architecture
I was dropped into a network with around 15 machines on it. It was a flat network with one /24 for internal use and one /24 for OpenVPN use. The first thing I wanted to do was get a good feel for what-does-what and what's open from the internet. The screenshot below, taken from the firewall/gateway device, shows the ports forwarded from the internet.

This screenshot made me realize it was going to be a long night even though it was a fairly small network 🙂
The first thing to notice here is that RDP is being forwarded from the internet to three machines via a high port. Other noteworthy port forwards I would want to check are the SSH connection and probably the VNC connection.
During this fact-finding process, you always want to get a good feel for what remote admin apps are used in the environment. While looking into this, I saw multiple remote admin tools installed on the system, such as TeamViewer, ChromeRDP, and VNC. Having these tools in the environment made it more time consuming to find the initial infection vector.
Also, during this process, I wanted to get a good handle on the network logs. We attempted to find connection logs from the Controller but soon learned that, unless logs were syslogged somewhere else, the only way to get them was on the gateway itself. We were able to SSH into the gateway, but the logs rolled over every 8 hours and no archived copies were kept. As I alluded to above, we came to find out about the ransomware 3 days after it occurred, which means no network logs for us :/ but we still had endpoint logs, or what was left of them…
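In hindsight, even a dead-simple syslog collector sitting somewhere on the LAN would have saved those gateway logs. Here's a minimal sketch of what I mean, assuming the gateway can forward syslog over UDP/514 to another box; the bind address, port, and file naming are just my assumptions, not anything from this environment:

```python
# Bare-bones syslog collector: listen on UDP/514 and append every message
# to a dated file so the gateway logs survive the device's own rollover.
# Hypothetical sketch; the bind address, port, and file naming are assumptions.
import socket
from datetime import date

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 514))  # standard syslog port; needs admin/root to bind

while True:
    data, addr = sock.recvfrom(8192)
    msg = data.decode("utf-8", errors="replace").rstrip()
    with open(f"gateway-{date.today()}.log", "a") as f:
        f.write(f"{addr[0]} {msg}\n")
```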
I will refer to a few of the machines below throughout the blog post.
- NAS – FreeNAS (shares encrypted) – SSH open from the internet
- MGMT Workstation – Win10 – RDP open from the internet
- Utility Workstation (Patient Zero) – Win10 – RDP open from the internet
- Guest Workstation – Win10 – RDP open from the internet
- Utility Server (encrypted) – Win 2016 Core – no connections from the internet
Live Incident Response
My friend noticed something wasn't right when his Plex server started misbehaving. A bunch of his movies were missing, but he had recently made a change to Plex and thought that was the issue. While researching that, he found that the Utility Server had been compromised by ransomware. This server showed signs of OS files, event logs, apps, movies, games, etc., being encrypted with Dever.

It was odd that only certain files were encrypted but this was a good starting place. Since we knew this was Dever ransomware, we knew what to look for. Dever usually places info.txt and info.hta on the desktop of the user who ran the ransomware, but these files were nowhere to be found.
At this point, we didn't know the source but knew this server was hit, and figured it either got hit by someone brute forcing RDP or SSH, or somebody clicked on something bad. We turned on other workstations on the network and dropped an agent on them to look for signs of compromise. While this was happening, I continued to look through logs to try to understand where this came from. Just to note, working with Windows Server Core was something I wasn't used to. A takeaway from this is to have scripts ready to run on systems that only have a CLI, something like the sketch below. Not having a GUI definitely slowed me down, but in the end it worked out.
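Here's the kind of thing I mean: a quick triage sketch that wraps a handful of built-in Windows commands and dumps everything to one file, so you aren't fumbling around in cmd on a GUI-less box. This is a hypothetical example (the command list and output path are my picks), and it assumes Python, or a standalone build of a script like this, is staged ahead of time:

```python
# Quick-and-dirty triage sketch for a GUI-less Windows box (e.g. Server Core).
# Runs a handful of built-in commands and writes all output to one file.
# Hypothetical example; the command list and output path are assumptions.
import subprocess

COMMANDS = [
    ["tasklist", "/v"],                     # running processes
    ["netstat", "-ano"],                    # network connections + owning PIDs
    ["schtasks", "/query", "/fo", "LIST"],  # scheduled tasks (persistence)
    ["net", "user"],                        # local accounts
    ["ipconfig", "/all"],                   # network configuration
]

with open("triage.txt", "w", encoding="utf-8", errors="replace") as out:
    for cmd in COMMANDS:
        out.write(f"===== {' '.join(cmd)} =====\n")
        result = subprocess.run(cmd, capture_output=True, text=True, errors="replace")
        out.write(result.stdout or "")
        out.write(result.stderr or "")
        out.write("\n")
```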
Around the time we finished installing agents, I learned that all the passwords were the same across the environment. This is one of those times during incident response when you're like, damn! I asked him to start changing passwords on all devices on the network, and in doing so we found the source of the compromise.
The Utility Workstation (Patient Zero) had files on the desktop named info.txt and info.hta. Here are screenshots for those two files:


Something to note here: the attackers are using AOL email addresses. WTH?
After finding this information, our hypothesis was that this machine was compromised and the Dever ransomware was run from it. But how were multiple machines hit? We will touch on this question later in the timeline section.
Here’s a screenshot of the files on the Desktop. Oh look, Process Hacker 2 was dropped and appears to be encrypted by the ransomware.

There was an external hard drive attached to this machine. Everything on this external drive got encrypted. Here's a screenshot:

There was also another share named main, which mapped to the FreeNAS server. OK, so we can now hypothesize that the NAS was encrypted because Dever most likely encrypted all attached shares as well as the OS. The question now is: how was the Utility Server compromised? We'll get to that in the timeline section.
I also downloaded and ran Loki on the machine but didn’t find anything. RAM was captured for processing and Redline packages were grabbed as well.
Prefetch
Are you familiar with Prefetch? If not, check out this article by FireEye. Now that you are familiar with Prefetch, we know that if something shows up in Prefetch, it ran. I had some issues using Redline to grab the Prefetch files (it only wanted to grab some of them), so I took a screenshot of the Prefetch directory instead. Note: all of these files were deleted.
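As an aside, even without a full parser you can get a rough "what ran, and roughly when" view just from the filenames and timestamps in the Prefetch directory. A minimal sketch of that idea is below; a proper parser such as PECmd will also pull run counts and the timestamps embedded in each .pf file:

```python
# Minimal sketch: list C:\Windows\Prefetch by last-modified time for a rough
# "what ran, and roughly when" view. A real parser (e.g. PECmd) will also
# extract run counts and the timestamps embedded in each .pf file.
from datetime import datetime
from pathlib import Path

pf_dir = Path(r"C:\Windows\Prefetch")

for pf in sorted(pf_dir.glob("*.pf"), key=lambda p: p.stat().st_mtime):
    mtime = datetime.fromtimestamp(pf.stat().st_mtime)
    print(f"{mtime:%Y-%m-%d %H:%M:%S}  {pf.name}")
```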
This is one of the most interesting Prefetches I’ve seen…

So what can we learn from this? Well, that timeline is super helpful, so thanks for that :). We can see that Mimikatz was dropped and run, probably multiple times. We also see EXEs named dllhost and svchost; we have no idea what they are, but given the time frame you can bet they are malicious. Right-clicking the Start button did not work on this machine, so there was definitely other malware besides the ransomware that ran as well. It appears that an EXE named ps.exe was dropped and run. Pure speculation here, but I've seen psexec.exe renamed to ps.exe a few times.
Another executable to note is lazagne.exe. What could this be? Well, it only takes a minute to find this password-stealing binary on GitHub.
The LaZagne project is an open source application used to retrieve lots of passwords stored on a local computer.
https://github.com/AlessandroZ/LaZagne
LaZagne can steal passwords from Windows, Linux, and macOS. It can steal passwords from practically every browser created, such as Firefox, Chrome, Opera, Chromium, etc. It can also steal passwords from Skype, PostgreSQL, Git, Outlook, KeePass, FileZilla, OpenVPN, OpenSSH, VNC, PuttyCM, wireless networks, autologins, etc.
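For reference, the project's README shows it being kicked off with a single command that runs every module it has:

laZagne.exe all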
Luckily, this machine wasn't used to log in to websites and didn't have too many apps on it. It appears that no credentials were stolen from this machine.
Do you save passwords in browsers? If so, you may want to think twice about doing that…
A few of the binaries in the Prefetch can be tied back to the ransomware, such as taskkill.exe, netscan.exe, wbadmin.exe, bcdedit, vssadmin.exe, wmic.exe, ipconfig.exe, nslookup.exe, attrib.exe, and processhacker.exe.
- taskkill and processhacker can be used to kill processes so that the ransomware can encrypt all the things.
- netscan was dropped and used to scan…for something 🙂 It appears that ipconfig and nslookup were part of this effort.
- wbadmin is used to delete the backup catalog
- wbadmin delete catalog -quiet
- bcdedit can be used to prevent the system from booting into recovery mode
- bcdedit /set {default} bootstatuspolicy ignoreallfailures
- bcdedit /set {default} recoveryenabled no
- vssadmin can be used to delete shadow copies
- vssadmin delete shadows /all /quiet
- WMIC can also be used to delete shadow copies
- wmic shadowcopy delete
Most of the other items in the Prefetch could not be identified; we didn't have enough information on them.
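Side note: these same commands are worth hunting for. Here's a hedged sketch of one way to do it, assuming process creation auditing with command-line logging (Security event 4688) is enabled, which it is not by default:

```python
# Hedged sketch: pull recent process-creation events (Security 4688) with
# wevtutil and flag command lines tied to backup/shadow-copy tampering.
# Assumes command-line logging for 4688 is enabled; it is off by default.
import subprocess

SUSPICIOUS = ["vssadmin delete shadows", "wbadmin delete catalog",
              "bcdedit /set", "wmic shadowcopy delete"]

result = subprocess.run(
    ["wevtutil", "qe", "Security", "/q:*[System[(EventID=4688)]]",
     "/f:text", "/c:200", "/rd:true"],
    capture_output=True, text=True, errors="replace",
)

# wevtutil's text output starts each record with an "Event[n]:" header.
for block in result.stdout.split("Event["):
    lowered = block.lower()
    if any(cmd in lowered for cmd in SUSPICIOUS):
        print("Possible backup/shadow-copy tampering:")
        print(block.strip()[:500], "\n")
```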
Timeline
Timeline in UTC – Windows Logs in EST – Redline in UTC
7:19 – Mimikatz dropped and run

7:20 – LaZagne ran, but the log was deleted.
7:22 – netscan.exe dropped and run

7:23 – Successful auth as admin to Utility Server

7:23 – SMB connection to Utility Server right after netscan.exe is run. Hmmmm, was this manual or automated? Could this be the spreading mechanism?

7:23 – The Utility Workstation attempts to connect to the Guest Workstation but fails because it's using the Utility Workstation name as the domain. Notice Logon Type 3 = network auth.

7:23 – The Utility Workstation successfully connects to the Guest Workstation using the Guest Workstation name as the domain.

7:23 – The Utility Workstation (Patient Zero) connecting to the Utility Server as null.

7:23 – The Utility Workstation (Patient Zero) connecting to the Utility Server as admin, trying to connect to the E drive, which is denied.

7:23 – The Utility Workstation (Patient Zero) attempting to connect to the print$ share on Guest Server but access is denied. This workstation did not show any signs of ransomware.

7:30 – All logs are cleared on the Utility Workstation (Patient Zero).

7:31 – Process Hacker 2 was installed on the Utility Workstation (Patient Zero).

7:33 – RDP session disconnected from the Utility Workstation (Patient Zero). The username on this session is the one that executed the Dever ransomware. The source IP address is 5.45.71[.]178 (happy dance). This IP is out of the Netherlands (surprise, surprise), owned by Serverius Holding B.V. This IP is not in any public threat report or list to my knowledge.

7:44 – The Utility Server shows a connection from the Utility Workstation (Patient Zero) that timed out. We can see that the Utility Workstation is connecting to the Utility Server over the C$ share. This log shows how/why this server got hit with the ransomware. It also makes us keep asking: was this manual or automated? Did the ransomware do that automagically? The ransomware IDs were the same for both machines, which usually means the ransomware was executed once and hit multiple machines.

16:31 – Defender turned back on and quarantined the ransomware on the Utility Workstation (Patient Zero).

16:37 – Defender quarantined a few more files and persistence keys.

Dever
I won't go into too much detail about Dever, but it uses AES to encrypt files, and there aren't any free decryptors for this strain. Dever comes from the Phobos ransomware family, which is based on Dharma, AKA CrySis. Here is an in-depth article on Phobos. Dever started appearing around the end of November 2019, but didn't show up on Twitter until the end of December 2019. This is a pretty good write-up on Dever from January 2020, and another pretty good write-up by EnigmaSoft.
We contacted the attackers using the AOL addresses given, and they asked for $5k for the decryptor. Even though my friend had TBs of data encrypted, none of it was data he absolutely had to have back. The biggest time sink in any ransomware scenario is the time it takes to rebuild or restore the environment. We decided to try to talk them down to see what they would say, but they ended up going silent.
Dever Analysis
Here's a screenshot from PeStudio showing all the blacklisted strings in Dever. It's easy to see this binary can create processes, terminate processes, execute shellcode, modify the registry, and perform process discovery.

Here is the indicators list from PeStudio – notice the 35 blacklisted strings, the high VT score, the 46 blacklisted imports, and the 7 MITRE techniques.

The MITRE techniques are as follows:

According to VirusTotal, this binary has a creation time of 6-19-2019, and its first submission was 12-29-2019.

I ran the ransomware through Any.Run, but I wasn't getting the outcome I hypothesized. Here are some things it did in the Any.Run sandbox:




Besides doing its ransomware and persistence thing, I didn't see any network connections. What gives? I was pretty sure this thing spread through SMB, but I couldn't reproduce it. Enter my home sandbox…
I ran it in my sandbox and immediately saw the following every 30 seconds.

This confirms that Dever scans for SMB and then infects the remote machine/share if it's accessible. I figured Dever was looking for something in the Any.Run sandbox and wouldn't scan for SMB there, but then I remembered seeing a red bar in Any.Run during the run.

I’m pretty sure Any.Run’s sandbox stopped functioning before the SMB scanning started. I’m glad I ran this in my own sandbox to verify the spreading mechanism.
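To make that behavior concrete, here's roughly what the pattern amounts to, re-created as a harmless Python sketch. This is my illustration of the observed behavior, not Dever's actual code, and the subnet is just an example:

```python
# Illustration of the observed behavior: sweep a /24 for TCP/445 every 30
# seconds and report which hosts answer. A harmless re-creation of the
# scanning pattern, not Dever's code; the subnet below is an example.
import ipaddress
import socket
import time

SUBNET = ipaddress.ip_network("192.168.1.0/24")  # example internal /24

def smb_open(host: str, timeout: float = 0.3) -> bool:
    try:
        with socket.create_connection((host, 445), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    live = [str(ip) for ip in SUBNET.hosts() if smb_open(str(ip))]
    print(f"hosts answering on 445/tcp: {live}")
    time.sleep(30)
```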
Summary
At the end of the day, my friend lost all of his data, and he will need to rebuild the environment. Three machines in total were ransomwared: the Utility Workstation (Patient Zero), the Utility Server, and the NAS. The logs had rolled over on the MGMT Workstation, but it did not show signs of encryption.
Dever uses SMB to move laterally in the environment and infect additional machines on the network. Once Dever has been run, it continues to scan that /24 for new SMB connections. It literally scans the whole /24 for port 445 every 30 seconds and then attempts to connect. When a computer comes online and has the same credentials, there is a high probability that machine will get ransomwared.
One question I do still have is how they passed multiple credentials to Dever. Mimikatz was obviously run to dump creds for lateral movement. PsExec *could* have been used for testing creds, which were then passed to Dever, but how? Dever ran as one account but encrypted another machine with a different account, so I know multiple credentials were used in this attack. If anyone has information on this, please contact me.
The bad guys asked for $5k and were not willing to negotiate, so they got zero. Not that we were going to pay anything, and I wouldn't recommend it, because you never know if the decryptor is going to work or if you'll even get one. Some big files or databases can be left in a corrupt state even after using the decryptor, so nothing is guaranteed. In all, this Dever ransomware experience took ~20 hours of live IR, timelining, and write-up. This doesn't account for my friend's time, as it will take him hours and hours to rebuild what he had.
Recommendations
- Do not expose single-factor remote admin tools such as RDP, TeamViewer, SSH, etc., to the internet. If you need to use these tools over the internet, use a VPN or, if you must, make sure you are using two-factor authentication.
- Do not allow machine-to-machine communication over SMB unless absolutely necessary.
- Don’t save passwords in browsers
- Use different privileged passwords on every endpoint.
- Centralize logs when possible
- Offline backups are the bee’s knees
- Security through obscurity does not stop a persistent adversary
- Monitor for security tools being turned off, such as Defender (see the sketch after this list)
- A ton more, but start at the top of this list and work down.
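On the Defender point, even a dumb scheduled check beats finding out three days later. Here's a hedged sketch that asks Defender for its status via PowerShell's Get-MpComputerStatus cmdlet; the "alerting" is just a print statement, so wire it up to whatever notification you actually use:

```python
# Hedged sketch for the Defender item above: ask Defender for its current
# status via PowerShell's Get-MpComputerStatus and complain if real-time
# protection is off. Alerting is just a print here; schedule it however you like.
import subprocess

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "(Get-MpComputerStatus).RealTimeProtectionEnabled"],
    capture_output=True, text=True, errors="replace",
)

if result.stdout.strip().lower() != "true":
    print("ALERT: Defender real-time protection is not enabled!")
else:
    print("Defender real-time protection is on.")
```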
IOCs
MISP Priv 65012 / UUID
5e471206-3fb8-43d3-adfd-4806950d210f
Commands run by Dever
wbadmin delete catalog -quiet
bcdedit /set {default} bootstatuspolicy ignoreallfailures
bcdedit /set {default} recoveryenabled no
vssadmin delete shadows /all /quiet
wmic shadowcopy delete
netsh firewall set opmode mode=disable
netsh advfirewall set currentprofile state off
svhost.exe scanning /24 for 445/tcp
Persistence locations can be found above or in MISP