In the prevailing corporate narrative, Artificial Intelligence is the ultimate productivity booster. The same technology, however, is simultaneously being weaponized. We have entered an era in which the world moves too fast for manual oversight, and sophisticated cybercrime is no longer the exclusive domain of elite state actors: it has been democratized, industrialized, and turned into a subscription service.
The contemporary intelligence landscape is undergoing a radical transformation driven by the proliferation of specialized, often uncensored, Large Language Models and automated reconnaissance frameworks. Tools such as DarkGPT and services under the “Cheating-as-a-Service” umbrella represent a significant shift in how both offensive actors and defensive researchers approach Open-Source Intelligence (OSINT) and Enhanced Due Diligence (EDD).
These instruments leverage the analytical power of models like GPT-4-200K to process vast quantities of unstructured data from leaked databases, social platforms, and corporate registries. What was once the sole domain of well-funded intelligence agencies is now accessible via subscription-based models or open-source GitHub repositories.
The same technologies that drive innovation can be weaponized for criminal purposes. The distinction between “White-Hat” and “Black-Hat” tools is increasingly narrow, defined largely by the user’s intent.
The global OSINT market is experiencing significant growth—size estimates range from $4.6 billion to $14.85 billion in 2024, with projections showing a CAGR of up to 28.2% through 2033. This market explosion is driven by cybersecurity needs, national security imperatives, and a new generation of investigators who cut their teeth on openly available data.
DarkGPT: The Bridge to Leaked Data
DarkGPT is not a newly trained model. It is a specialized, “jailbroken” variant of GPT-4-200K that acts as an analytical bridge between raw leaked data and actionable intelligence. It integrates directly with the Dehashed API to allow users to query massive databases of leaked credentials using plain English.
By simply asking, “What are the leaked passwords associated with this domain?”, a researcher—or an attacker—can instantly synthesize millions of rows of breached data into a targeted intelligence report. The tool automates repetitive OSINT tasks that would otherwise take analysts hours or days of manual database querying.
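The mechanics behind this bridge are simple to sketch. The following Python example is a minimal illustration of the pattern, not DarkGPT's actual source: it assumes Dehashed's documented search endpoint and uses a stand-in OpenAI model name, both of which should be verified against current documentation.

import os
import requests
from openai import OpenAI

def search_dehashed(query: str) -> dict:
    # Dehashed's search API uses HTTP Basic auth (account email + API key).
    resp = requests.get(
        "https://api.dehashed.com/search",
        params={"query": query},
        auth=(os.environ["DEHASHED_EMAIL"], os.environ["DEHASHED_API_KEY"]),
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def summarize(records: dict) -> str:
    # Hand the raw breach records to an LLM for a plain-English report.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-4o",  # stand-in; DarkGPT reportedly wraps a GPT-4 variant
        messages=[
            {"role": "system",
             "content": "You are an OSINT analyst. Summarize these breach records."},
            {"role": "user", "content": str(records)},
        ],
    )
    return reply.choices[0].message.content

if __name__ == "__main__":
    # Defensive use case: auditing your own domain's exposure.
    print(summarize(search_dehashed("domain:example.com")))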
DarkGPT Technical Profile
- Model Architecture: Based on GPT-4-200K for high-context data analysis
- Database Integration: Connects with the Dehashed API to query breached data
- Querying Style: Natural language queries (e.g., “What emails are associated with domain example.com?”)
- Interface: Command-Line Interface (CLI) for rapid operational use
- Primary Use Case: OSINT, credential stuffing research, and database leak analysis
- Repository: github.com/luijait/DarkGPT
Installation and Setup for Professionals
To deploy DarkGPT in a research or defensive environment, practitioners need Python 3.8+, Git, a Dehashed API key, and an OpenAI API key. The minimum recommended hardware is a quad-core processor with 16 GB RAM.
# Clone the repository and install the Python dependencies
git clone https://github.com/luijait/DarkGPT.git
cd DarkGPT
pip install -r requirements.txt

# Copy the environment template, then add your Dehashed and OpenAI API keys to .env
cp .env.example .env

# Launch the interactive CLI
python3 main.py
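The .env file is where the API keys listed in the prerequisites go. The variable names below are placeholders for illustration; mirror the exact names shipped in the repository's .env.example.

# .env (variable names are illustrative; copy the exact ones from .env.example)
OPENAI_API_KEY=sk-...
DEHASHED_API_KEY=your-dehashed-key
DEHASHED_USERNAME=you@example.com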
The Underground Market for Malicious AI
DarkGPT exists within a broader ecosystem of “Dark AI” tools that have seen a 200% increase in market mentions in 2024. Operating under an AI-as-a-Service model, these uncensored systems sell subscriptions from $60 to $700 per month, making sophisticated cyberattacks affordable for novice criminals.
These threat actors don’t build models from scratch. Instead, they adapt powerful, often open-source models: removing ethical guardrails through prompt injection, fine-tuning on malicious data, or wrapping them in applications that bypass safety filters. The resulting jailbroken services are sold on Telegram channels and dark web forums, sometimes for as little as €60 per month.
The world loses approximately $19.9 million per minute to cybercrime. This translates to roughly $1.2 billion every hour. The clock is no longer just a measure of time—it is a measure of loss.
Cheating-as-a-Service: The Erosion of Digital Trust
A major development in the 2025 landscape is “Cheating-as-a-Service” (CaaS)—a grey market of tools that blur the lines between human augmentation and unethical advantage. The flagship product is Cluely, an “invisible AI assistant” designed to be completely undetectable during meetings, calls, and exams.
Cluely analyzes both on-screen visual data and audio in real-time, providing answers during coding interviews, high-stakes exams, and critical sales calls. Its founders, Roy Lee and Neel Shanmugam (both 21), were expelled from Columbia University for creating the tool’s predecessor. Despite this notoriety, Cluely raised $15 million from Andreessen Horowitz in June 2025, reaching a $120 million valuation.
Cluely at a Glance
- Platform: macOS (10.15+), Windows 11, iOS (15.1+)
- Undetectability: Built to be “completely invisible” to conferencing platforms
- Business Model: Freemium — Free basic, $20/mo or $100/year Pro
- Revenue: ~$3M annually (reported)
- Funding: $5.3M seed (Apr 2025) → $15M Series A by a16z (Jun 2025)
- Valuation: $120 million
- Roadmap: Hardware integration (smart glasses)
- Links: cluely.com
The tool’s explosive success points to a broader cultural shift that researchers describe as “financial nihilism”—where efficiency and outcome are prioritized over traditional ethics. In an EDD context, this poses a significant risk: corporate executives and technical candidates can now project competencies they don’t actually possess.
The Counter-Market Emerges
Cluely’s rise has triggered an immune response. Startups like Validia (“Truely”) and Proctara are developing detection tools for AI-assisted cheating. Universities are shifting from prohibition to managed integration, with 89% of students reportedly using AI tools by 2025.
The cycle: provocative AI product → product-market fit → top-tier VC funding → defensive counter-market → institutional policy adaptation. This pattern will repeat across industries.
The Ferrari Deepfake Case
In July 2024, a Ferrari executive narrowly escaped a deepfake scam where an attacker used AI voice cloning to mimic CEO Benedetto Vigna’s Southern Italian accent in a WhatsApp call, requesting urgent funds for a “confidential acquisition.” The executive only detected the fraud by asking a personal question the AI couldn’t answer.
AI-Enhanced Attack Vectors vs. Defensive Responses
The adoption of DarkGPT-class tooling and CaaS-aware checks in Enhanced Due Diligence is a response to the “wobbly façade” of traditional banking compliance. Organizations now face threats like autonomous ransomware that analyzes exfiltrated financial data and calculates ransom demands based on the victim’s liquidity.
- Offensive (Hyper-Personalized Phishing): AI reduces the cost of personalized phishing by 95%. Criminals mimic executive writing styles and deploy voice cloning for deepfake calls.
- Defensive (Synthetic Text Detection): Detection models are trained against AI-generated text to identify the patterns that distinguish machine-crafted messages from genuine ones (a minimal sketch follows this list).
- Offensive (Synthetic Identities & Deepfakes): Synthetic IDs and deepfakes are generated to bypass biometric checks and KYC; tools like DeepFaceLab increasingly fool verification systems.
- Defensive (Red Team Toolkit): Open-source tools simulate Dark Web AI attacks for defensive testing, supported by standardized threat schemas and deepfake classifier benchmarks.
- Offensive (Automated Reconnaissance): DarkGPT automates credential stuffing and target network mapping, moving from discovery to exploitation at machine speed.
- Defensive (AI-Powered SOC Monitoring): LLM agents handle autonomous threat response, cutting response time from 3–7 hours (manual) to 1–2 minutes (AI-driven).
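As a concrete example of the synthetic-text-detection idea, the toy baseline below scores a message's perplexity under the public GPT-2 model: machine-generated text tends to be unusually predictable, so very low perplexity is a weak signal of AI authorship. This is a heuristic sketch, not a production detector, and the cutoff value is an arbitrary assumption.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Perplexity = exp of the model's average next-token loss on the text.
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

def looks_synthetic(text: str, cutoff: float = 40.0) -> bool:
    # Low perplexity = highly predictable = weak evidence of machine authorship.
    # The cutoff is illustrative only; real detectors calibrate on labeled data.
    return perplexity(text) < cutoff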
The “Agentic AI” Threat
“Agentic AI” systems autonomously execute multi-stage attacks. They perform reconnaissance, harvest credentials, penetrate networks, and generate psychologically targeted ransom notes from exfiltrated data—all without human intervention.
Case Study: Jan Marsalek & the Wirecard Investigation
Investigators Christo Grozev and Roman Dobrokhotov used OSINT “follow the money” approaches to uncover fugitive Wirecard COO Jan Marsalek’s ties to Russian intelligence. This case demonstrates the profound transformation of investigative journalism through digital sleuthing.
Strategic Defense Framework for 2025
- Blockchain for Data Provenance: an immutable ledger tracks data origins and prevents “data poisoning” during AI model training.
- Federated Learning: models are trained in a distributed fashion without centralizing raw data, reducing the “blast radius” of any breach.
- Zero-Trust Architecture (ZTA): “never trust, always verify,” with continuous authentication and AI-monitored behavioral analysis.
- First-Order Logic (FOL) for Compliance: deterministic, machine-checkable proofs deliver mathematically verifiable compliance without “black-box” AI (see the sketch after this list).
- Self-Sovereign Identity (SSI): verified credentials live in private digital wallets. “Verify once, use everywhere” via identity.global and Nansen.ID.
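To make the FOL point concrete: a rule such as “every transaction above the reporting threshold from a non-KYC-verified sender must be flagged” is a universally quantified implication, ∀t (amount(t) > θ ∧ ¬kyc(sender(t)) → flagged(t)), and can be checked deterministically. The sketch below uses illustrative field names and an assumed threshold; the point is that the verdict is reproducible and auditable, unlike a black-box model’s.

from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    sender_kyc_verified: bool
    flagged: bool

THRESHOLD = 10_000  # assumed reporting threshold, for illustration only

def rule_holds(t: Transaction) -> bool:
    # Material implication: (amount > THRESHOLD and not kyc) -> flagged
    return t.flagged or not (t.amount > THRESHOLD and not t.sender_kyc_verified)

def violations(ledger: list[Transaction]) -> list[Transaction]:
    # Universal quantification over the ledger. An empty result is a proof
    # the rule holds; each counterexample doubles as audit evidence.
    return [t for t in ledger if not rule_holds(t)]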
The “ShmagunGPT” Framework
The “ShmagunGPT” concept digitizes the investigative expertise of top journalists like Olesya Shmagun—“understanding the origin of money” and “spotting anomalies in behavior.” This methodology, from Navalny’s anti-corruption team, was built on open data. Read more: From Dissident to Detective.
The most important qualities for modern OSINT aren’t a special degree or clearance level. They are patience and attentiveness—because the data is already out there, waiting to be found.
Conclusion: Navigating the 2025 Intelligence Landscape
The dual-use nature of specialized AI instruments represents the defining challenge for corporate security in 2025. The future of EDD lies in self-sovereign identity and deterministic compliance based on First-Order Logic. Trust must be cryptographically proven and constantly re-verified against AI-powered deception.
All Source Links & References