Saturday, May 09, 2026

A Peek Inside the Conti Ransomware Gang's ContiLeaks Publicly Accessible Malware Bot Documentation - An Analysis - Part Four

Miscellaneous Automation and Testing Report

Generated: 2026-04-27

Executive summary

The material describes a collection of operational and testing specifications around a broader malware-development and validation workflow. The documents cover Tor/XMPP setup, automation of sample testing in a crypto panel, group-based virtual-machine testing, injector test automation, DPOST-style data submission, injector verification scripts, automated webmail account registration, and status tracking for encrypted sample checks.

The common thread is automation of repetitive operational work. The documents define how testers and scripts should interact with panels, virtual machines, browser-injection components, malware samples, Windows Defender, administrative backends, and reporting APIs. The emphasis is on reducing manual testing, standardizing logs, coordinating multiple virtual machines, recording pass/fail status, and tying sample execution results back into a central panel.

The overall system described here is not a single component. It is a set of supporting processes and technical requirements for validating other components: encrypted samples, bots, spreading modules, injectors, DPOST collectors, and related infrastructure.

Tor and XMPP setup

One document gives a short procedure for using the Tor Expert Bundle as a local SOCKS5 proxy. The Tor process is started locally and expected to expose a SOCKS endpoint on 127.0.0.1:9050. This proxy is then configured in an XMPP/Jabber client.
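The local SOCKS5 endpoint described above can be sanity-checked before pointing the XMPP client at it. The sketch below is illustrative only: it sends the standard SOCKS5 greeting (version 5, offering the "no authentication" method) to the default Tor port and checks the two-byte reply. The host/port constants are the Tor Expert Bundle defaults; everything else is an assumption for illustration.

```python
import socket

SOCKS_HOST, SOCKS_PORT = "127.0.0.1", 9050  # Tor Expert Bundle default SOCKS endpoint

def socks5_greeting() -> bytes:
    # Version 5, one auth method offered: 0x00 (no authentication).
    return b"\x05\x01\x00"

def socks5_accepts_no_auth(reply: bytes) -> bool:
    # The server answers with its version byte and the chosen auth method;
    # 0x05 0x00 means "SOCKS5, no authentication required".
    return len(reply) >= 2 and reply[0] == 0x05 and reply[1] == 0x00

def probe_local_socks(host: str = SOCKS_HOST, port: int = SOCKS_PORT) -> bool:
    """Return True if something on host:port speaks SOCKS5 without auth."""
    try:
        with socket.create_connection((host, port), timeout=3) as s:
            s.sendall(socks5_greeting())
            return socks5_accepts_no_auth(s.recv(2))
    except OSError:
        return False
```

If the probe fails, the XMPP client's SOCKS5 settings will fail in the same way, so this separates Tor startup problems from client misconfiguration.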

The XMPP client configuration includes:

- SOCKS5 proxy type.

- Local proxy host and port.

- Always encrypt connection.

- Allow plaintext authentication only when the connection itself is encrypted.

- Trust the certificate warning on first connection.

- Fill only the nickname/profile field after first login.

The instruction not to invent extra nicknames is consistent with the broader identity-management theme in the management documents: avoid unnecessary identity artifacts and keep account naming predictable.

Crypto-panel testing automation

Several documents describe a "crypto panel" used as a queue and status-management system for encrypted sample testing. The intended workflow resembles a controlled internal file-checking system:

- Users upload or mark samples for testing.

- Virtual machines request samples through an API.

- A sample is assigned to a VM and marked as being checked.

- The VM test script executes a defined test sequence.

- The VM returns status and logs.

- The panel records results and exposes them to operators.

Sample statuses

Encrypted samples should support the following states:

- Do not check.

- Needs checking.

- Checking.

- Checked.

"Do not check" requires a reason, such as known non-working status or a custom reason with comment. "Needs checking" can place a file at the front or back of a queue and can include whether the VM may be rolled back after testing. "Checked" stores per-VM logs, status, start time, and end time.
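The four states and their constraints can be modeled as a small state machine. The transition table below is inferred from the described workflow, not taken verbatim from the documents; field names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    DO_NOT_CHECK = "do_not_check"      # requires a reason
    NEEDS_CHECKING = "needs_checking"  # queued (front or back)
    CHECKING = "checking"              # assigned to a VM
    CHECKED = "checked"                # per-VM logs and timestamps stored

# Transitions implied by the described workflow (an assumption).
ALLOWED = {
    Status.NEEDS_CHECKING: {Status.CHECKING, Status.DO_NOT_CHECK},
    Status.CHECKING: {Status.CHECKED, Status.NEEDS_CHECKING},  # re-queue stale checks
    Status.CHECKED: {Status.NEEDS_CHECKING},
    Status.DO_NOT_CHECK: {Status.NEEDS_CHECKING},
}

@dataclass
class Sample:
    file_id: int
    status: Status = Status.NEEDS_CHECKING
    reason: Optional[str] = None    # mandatory for DO_NOT_CHECK
    front_of_queue: bool = False
    allow_rollback: bool = True     # whether the VM may be rolled back afterwards

    def set_status(self, new: Status, reason: Optional[str] = None) -> None:
        if new not in ALLOWED[self.status]:
            raise ValueError(f"illegal transition {self.status} -> {new}")
        if new is Status.DO_NOT_CHECK and not reason:
            raise ValueError("'do not check' requires a reason")
        self.status, self.reason = new, reason
```

Enforcing the reason requirement at the transition point mirrors the panel rule that "do not check" must always be explained.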

Virtual machine inventory

The panel should include a VM management section. Each VM has properties such as:

- Machine name.

- IP address.

- OS architecture.

- OS version.

- Active/inactive status.

When a test script connects to the panel API, it identifies itself by the machine name. The panel uses this identity to select work and track which VM is checking which file.

Panel API model

The crypto-panel API has two core operations:

getfile

The VM asks for the next priority file for its machine name. The response contains a database file ID and a base64-encoded file body. From the moment a file is issued, the machine is marked as checking that file. Repeated getfile requests from the same machine are treated as valid, on the assumption that the previous transfer may have failed partway.


setresult

The VM submits the result for a previously assigned file. Parameters include machine name, status, base64-encoded test log, and file ID. The panel validates that the machine was actually checking that file and that the check is not older than the permitted window. Invalid or stale submissions are ignored.

The API design uses a long random URL prefix and a shared POST password to separate automation endpoints from normal user accounts. The documents explicitly call for injection checks on these API functions.
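A VM-side client for these two calls can be sketched as follows. The panel URL, password, and parameter names here are hypothetical placeholders, not values from the leak; only the shape (long random prefix, shared POST password, machine-name identity, base64 bodies) comes from the description above.

```python
import base64
import urllib.parse
import urllib.request

# Hypothetical placeholders -- not real infrastructure.
PANEL = "https://panel.example/a1b2c3d4e5f6"   # long random URL prefix
POST_PASSWORD = "shared-secret"                # shared POST password
MACHINE = "vm-win10-x64-01"                    # machine name = API identity

def getfile_request() -> urllib.request.Request:
    """Build the POST asking for the next priority file for this machine."""
    data = urllib.parse.urlencode({"pwd": POST_PASSWORD, "machine": MACHINE}).encode()
    return urllib.request.Request(f"{PANEL}/getfile", data=data)

def decode_file(b64_body: str) -> bytes:
    """File bodies travel base64-encoded in the getfile response."""
    return base64.b64decode(b64_body)

def setresult_payload(file_id: int, status: str, log_text: str) -> bytes:
    """Build the POST body reporting a result for a previously assigned file."""
    return urllib.parse.urlencode({
        "pwd": POST_PASSWORD,
        "machine": MACHINE,
        "id": file_id,
        "status": status,
        # Logs travel base64-encoded, like the file bodies.
        "log": base64.b64encode(log_text.encode()).decode(),
    }).encode()
```

Note that the panel, not the client, enforces the validity rules: that this machine was actually checking that file ID and that the check is not stale.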

VM rollback behavior

If testing succeeds, or if the check allows VM rollback, the panel should trigger VM rollback through a virtualization-management API. If rollback is not allowed, the VM remains inactive for new tasks until an operator manually resolves it. The UI should notify the operator and expose a manual rollback control.

Group-based virtual-machine testing

One specification extends the crypto panel for testing bot propagation across groups of virtual machines. A VM group contains one master machine and one or more slave machines.

Manual propagation test scenario

The manual test flow is:

- Start the bot on the master machine.

- Determine the bot identifier based on the machine IP.

- Use the administrative backend to start propagation-related modules.

- Wait for two connected machines in the same domain/network to report back.

Automated group test concept

The goal is to detect callbacks or reports from two other machines after propagation modules are launched. Two implementation options are discussed:

1. Master-only crypto-panel integration

Only the master VM's script talks to the crypto panel. Slave scripts report to the master via network shares, FTP, or similar mechanisms. This option is considered problematic because file sharing and its authentication can interfere with clean test conditions.

2. Full crypto-panel integration

Each VM runs a full autotest script and talks to the crypto panel. This requires the crypto panel to understand VM grouping and master/slave relationships.

Chosen panel behavior for groups

The panel should show only the master VM when assigning work. When a task is assigned to the group master:

- All VMs in the group are started.

- The master requests and receives the sample like a normal VM.

- Slave machines do not request the sample.

- Slave machines still submit reports after a defined period.

- The panel maps slave reports to the active group task.

- The group task completes only when all machines submit status and logs.

- The final result is successful only if every machine succeeds.

- The final report concatenates per-machine logs, each preceded by the machine name.

- Timeout, shutdown, and rollback behavior mirrors ordinary VM behavior, but applies to the whole group.
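The aggregation rule for a group task can be captured in a few lines. This is a sketch of the described semantics: the group passes only if every member passes, and the final report concatenates per-machine logs, each preceded by the machine name. The data shapes are assumptions.

```python
from typing import Dict, Tuple

def aggregate_group(reports: Dict[str, Tuple[bool, str]]) -> Tuple[bool, str]:
    """Combine per-machine (success, log) pairs into one group result.

    The group task succeeds only if every machine succeeded; the combined
    log lists each machine's log under a header with its machine name.
    """
    ok = all(success for success, _ in reports.values())
    log = "\n".join(f"=== {name} ===\n{text}"
                    for name, (_, text) in reports.items())
    return ok, log
```

A group task would call this only once all members have submitted status and logs, matching the rule that the task completes only when every machine has reported.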

Administrative backend integration

A hidden backend API call is requested to trigger a fixed set of propagation-related module commands for a bot ID. The response model is simple: success if commands were executed, not found if the bot ID is absent.

Automated encrypted-sample testing

One large specification describes testing encrypted samples against Windows Defender and then checking whether the executed client installs and reports to an administrative backend.

Environment assumptions

The expected environment includes:

- Windows 10 with current updates.

- Windows Defender as the only antivirus of interest.

- A crypto panel that stores samples.

- An administrative backend where the launched client reports activity.

- PowerShell, preferably version 2.0 for broad Windows compatibility.

Manual testing scenario

The manual process includes:

- Prepare the VM.

- Download an archive with the sample from the crypto panel.

- Check whether Defender detects the archive.

- Unpack the archive.

- Check whether Defender detects unpacked files.

- Launch the unpacked file.

- Check whether Defender detects the running file.

- Wait for installation indicators.

- Read client ID from a local file.

- Query the administrative backend for activity related to that client ID.

- Check for online activity and expected module starts.

- Check Defender again after callback/module activity.

Autotest units

Each autotest is a small routine returning true or false. The defined tests include:

- archive_static_detect.

- unarchived_static_detect.

- proactive_detect.

- client_installed.

- client_knocked.

- client_sysinfo_loaded.

- client_inject_loaded.

- client_modules_detect.

Any antivirus detection stops further testing. Later tests are marked SKIPPED, which is treated as failure for overall result calculation but distinguished from direct failure.
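The abort-and-skip semantics can be sketched as a small runner. The test names come from the list above; the runner logic (detection inverts pass/fail, a detection aborts the rest, SKIPPED counts as failed in the exit code) follows the described rules. Everything else is illustrative.

```python
from typing import Callable, List, Tuple

# Detection-style tests from the specification's list; for these a
# positive result (Defender fired) means the sample FAILED the check.
DETECTION_TESTS = {"archive_static_detect", "unarchived_static_detect",
                   "proactive_detect", "client_modules_detect"}

def run_suite(tests: List[Tuple[str, Callable[[], bool]]]) -> Tuple[int, List[str]]:
    """Run named tests in order; any AV detection skips the remainder.

    Returns (exit_code, result_lines) where the exit code equals the
    number of tests that did not pass, with SKIPPED counted as failed.
    """
    results, failed, aborted = [], 0, False
    for name, fn in tests:
        if aborted:
            results.append(f"{name}: SKIPPED")
            failed += 1
            continue
        outcome = fn()
        if name in DETECTION_TESTS:
            passed = not outcome   # inverted: detection event = failure
            aborted = outcome      # detection stops further testing
        else:
            passed = outcome
        results.append(f"{name}: {'OK' if passed else 'FAIL'}")
        failed += 0 if passed else 1
    return failed, results
```

Distinguishing SKIPPED from FAIL in the log while counting both toward the exit code preserves the document's pass/fail bookkeeping.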

Backend callback verification

The administrative backend is queried by a client ID suffix derived from a local client ID file. The test checks for recent activity only. A timeout parameter is used to avoid confusing current-test activity with stale callback records from earlier test runs on the same VM.

Script configuration

The PowerShell script should expose human-editable configuration variables near the top, including:

- Crypto-panel URL and credentials.

- Administrative backend URL and credentials.

- Polling interval.

- Candidate local installation directories.

- Event-specific timeouts for archive detection, unpacked detection, proactive detection, client install, callback, module starts, and post-callback detection.

- GitLab URL, credentials, and script source URI for script auto-update.

Test loop model

Each test runs until its condition appears or its timeout expires. The script sleeps between checks. For ordinary functional tests, seeing the expected condition means success. For detection tests, the meaning is inverted: a detection event means test failure.
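The poll-until-condition-or-timeout loop is the same for every test; only the interpretation of the outcome differs. A minimal sketch (the injectable clock/sleep parameters are added here for testability, not part of the specification):

```python
import time
from typing import Callable

def wait_for(condition: Callable[[], bool], timeout: float,
             interval: float = 1.0,
             clock: Callable[[], float] = time.monotonic,
             sleep: Callable[[float], None] = time.sleep) -> bool:
    """Poll `condition` until it holds or `timeout` seconds elapse."""
    deadline = clock() + timeout
    while True:
        if condition():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)
```

For a functional test, `wait_for(client_installed, timeout)` returning True means success; for a detection test the result is inverted, so `passed = not wait_for(defender_detected, timeout)`.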

Windows Defender telemetry

The specification proposes reading Windows Defender Operational event logs through PowerShell to determine whether detection events occurred.
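On current Windows versions this log is readable with `Get-WinEvent` against the `Microsoft-Windows-Windows Defender/Operational` channel, where event IDs 1116 (detection reported) and 1117 (action taken) are the usual detection signals. The helper below only composes such a query string; whether the leaked specification used these exact event IDs is not stated, so treat them as a plausible reconstruction.

```python
DEFENDER_LOG = "Microsoft-Windows-Windows Defender/Operational"
DETECTION_EVENT_IDS = (1116, 1117)  # detection reported / action taken

def defender_query(since_minutes: int = 10) -> str:
    """Compose a Get-WinEvent command counting recent detection events."""
    ids = ",".join(str(i) for i in DETECTION_EVENT_IDS)
    return (
        "Get-WinEvent -FilterHashtable @{"
        f"LogName='{DEFENDER_LOG}'; Id={ids}; "
        f"StartTime=(Get-Date).AddMinutes(-{since_minutes})"
        "} -ErrorAction SilentlyContinue | Measure-Object | "
        "Select-Object -ExpandProperty Count"
    )

# On a test VM this string would be passed to powershell.exe; a non-zero
# count during the watch window marks the detection test as failed.
```

Bounding the query with `StartTime` serves the same purpose as the timeout parameter described earlier: ignoring detection records left over from previous runs on the same VM.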

Test output

The script prints timestamped test start and result lines. It returns an OS exit code equal to the number of failed tests. It also sends the complete log and final status back to the crypto panel.

Script auto-update

Because test logic and working-directory expectations can change, the script should self-update before running. GitLab API access is proposed for authentication and downloading the current script source.

Injector test automation

The injector testing specification defines automated checks for an injector module in logged and non-logged variants, and in autonomous or bot-launched modes. The logged module expects specific config files and writes loader/core logs under a temporary directory.

Test structure

Each test is a PowerShell v2.0 function that verifies one simple condition and returns a Boolean result. Each test has a Message property containing diagnostic evidence. On success, this may be the matched log line. On failure, this may be a clear diagnostic such as an expected substring not found in a log file.
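The one-condition-plus-Message shape translates directly into a small result type. Sketched in Python rather than PowerShell for brevity; the function and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TestResult:
    passed: bool
    message: str  # diagnostic evidence: matched line on success, reason on failure

def check_log_marker(log_text: str, marker: str) -> TestResult:
    """One-condition test: does a loader/core log contain the marker?"""
    for line in log_text.splitlines():
        if marker in line:
            return TestResult(True, line.strip())
    return TestResult(False, f"expected substring {marker!r} not found in log")
```

Carrying the matched line (or the missing substring) in the result means the aggregate log explains itself without anyone re-opening the raw loader/core logs, which are deleted after each test anyway.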

Global behavior

The script should:

- Auto-update similarly to other autotest scripts.

- Return an OS exit code equal to the number of failed tests.

- Start loader.exe before most tests.

- Stop loader.exe after each test.

- Delete logs after each test.

- Log timestamped start/result lines.

- Stop after the first non-proxy test failure.

- Treat skipped tests as failed for exit-code purposes.

Injector tests

The injector automation includes tests for:

- Static-inject proxy reachability.

- Dynamic-inject proxy reachability.

- DPOST proxy reachability.

- Loader log creation and expected startup markers.

- Core log creation and expected startup markers.

- Injection presence in Chrome, Firefox, Internet Explorer, and Edge.

- DPOST behavior for Chrome, Firefox, Internet Explorer, and Edge.

- HTTP/2 disabling behavior in Chrome, Firefox, and Internet Explorer.

Detailed browser checks

For Chrome, the script checks core log markers such as browser identification, browser version, and SSL function discovery. It also checks that Chrome remains stable and that expected command-line flags disable HTTP/2/SPDY/QUIC.

For Firefox, the script checks log markers for Firefox identification and version, process stability, and a profile preference related to HTTP/2.

For Internet Explorer, the script checks log markers for IE identification and version, process stability, and the registry setting disabling HTTP/2.

Edge support is noted but less fully specified. A separate note says Edge logging requires manual creation of its log file with correct permissions, and this work is deferred.

Second-stage injector testing

Future work includes:

- Automatic installation or update of Chrome and Firefox on test VMs.

- Integration with bot testing.

- Injector presence checks without parsing logs.

- Opening a controlled page in each browser and verifying injected page content through browser source inspection.

- Use of WinAPI window interaction for page/source inspection.

Manual injector verification script

An earlier injector-check document requests a command script that verifies loader health, browser injection, credential/history grab log patterns, and page-injection behavior across Firefox, Chrome, Internet Explorer, and Edge.

The script log should include:

- Current date/time.

- Machine name.

- Test start and finish lines.

- Success/failure status.

- Total tests.

- Successful tests.

- Failed tests.

- Browser version before each browser-specific test series.

English log labels are preferred because Cyrillic may not be available on all systems.

DPOST password-data submission

One specification describes a DPOST-style channel for submitting collected password records. The module receives group and client identity from parent logic and sends HTTP POST requests to a server path ending in command code /81/.

Request URI model

The URI format is:

/<group-tag>/<clientid>/81/

The group tag and client ID are inherited from parent logic. The server returns HTTP 200 with body /1/ on success, and HTTP 403 for invalid method, invalid body fields, invalid URI format, or invalid client ID format.

Body format

The body uses multipart/form-data and includes:

- source: human-readable description of the data source.

- data: long UTF-8 text containing one record per line.

Each record is represented as:

resource|username|password

The resource value depends on source type. Browser records use URLs. Other examples include account systems, operating-system profiles, mail servers, FTP endpoints, or application-specific identifiers.

Handler configuration

The module receives a dpost XML configuration through Control with ctl dpost. The config contains one or more handler entries. A handler may include an explicit http/https prefix. If no prefix is present, protocol is inferred from port parity: even ports use HTTP and odd ports use HTTPS.
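The port-parity rule is unusual enough to be worth pinning down. A minimal sketch of the inference as described (hostnames are placeholders):

```python
def infer_handler_url(handler: str) -> str:
    """Expand a dpost handler entry to a full URL.

    Entries with an explicit http/https prefix pass through unchanged;
    otherwise the scheme is inferred from port parity as described:
    even port -> http, odd port -> https.
    """
    if handler.startswith(("http://", "https://")):
        return handler
    host, _, port = handler.rpartition(":")
    scheme = "http" if int(port) % 2 == 0 else "https"
    return f"{scheme}://{host}:{port}"
```

Conveniently for the rule's authors, the common defaults line up with it: 80 is even (HTTP) and 443 is odd (HTTPS).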

Submission behavior

The sender tries handlers in order. It appends the group/client command path to the handler base URL and sends the POST. If there is no response or the status is not 200, it moves to the next handler. On success, it stops. If all handlers fail, either retry after a minute or exit; the choice is left to implementation.

Automated mail-account registration

One document specifies a command-line script for registering webmail accounts. It targets a specific webmail provider and works through randomly selected SOCKS5 proxies from a text file.

Functional requirements include:

- Use a proxy list in host:port format.

- Randomly generate plausible required identity fields.

- Support additional dictionary files for names and related values.

- Accept command-line parameters for proxy file, output account file, dictionary files, and number of accounts to register.

- Save created account credentials as email:password.

- Log actions to stdout/stderr with timestamps.

The requested logging style is verbose: start message, number of accounts, dictionary/proxy file names, selected identity data, connection attempts, HTTP status results, account creation results, and final output-file notice.

System characterization

The documents describe a support layer around a malware-oriented development pipeline. Rather than defining core implant logic, they define how to test, validate, and operationalize surrounding components:

- Use Tor-backed messaging setup.

- Queue encrypted samples for VM testing.

- Automate Windows Defender checks.

- Verify installation and callback behavior.

- Verify module start events in an admin backend.

- Coordinate multi-VM propagation tests.

- Automate browser injector validation.

- Submit DPOST-style captured data.

- Track VM-specific test status and logs.

- Register webmail accounts through proxies.

This support layer aims to reduce manual tester effort and make results reproducible. The main design pattern is a central panel plus VM scripts. The panel assigns work and stores status. The scripts run local checks, gather logs, and submit results. For multi-machine scenarios, the panel coordinates master/slave VM groups and derives success from every VM in the group.

Overall assessment

The miscellaneous documents show a mature testing and operations mindset around the broader toolchain. The emphasis is not just on building components but on continuously checking whether components survive antivirus scanning, install successfully, report to backend infrastructure, load expected modules, inject into browsers, and behave correctly across multiple VM/browser/OS combinations.

Several recurring engineering choices appear across the set:

- PowerShell v2.0 for compatibility.

- Timestamped logs.

- Simple Boolean tests with diagnostic messages.

- Exit codes based on failed-test count.

- Base64 transport for binary files and logs.

- Long random API prefixes plus POST passwords for automation APIs.

- VM rollback as part of test cleanup.

- Machine names as API identity.

- Direct attachment or submission of logs rather than transient external links.

The documents fit naturally with the prior management, injector, and CS2 reports: they describe the automation and QA layer that surrounds the bot, injector, crypto, and panel ecosystem.
