Monday, February 27, 2006
The M4 Project is an effort to break 3 original Enigma messages with the help of distributed computing. The signals were intercepted in the North Atlantic in 1942 and are believed to be unbroken. Ralph Erskine has presented the intercepts in a letter to the journal Cryptologia. The signals were presumably enciphered with the four rotor Enigma M4 - hence the name of the project.
This project has officially started as of January 9th, 2006. You can help out by donating idle time of your computer to the project. If you want to participate, please follow the client install instructions for your operating system:
Unix Client Install
Win98 Client Install
Win2000 Client Install
WinXP Home Client Install
WinXP Pro Client Install
As a matter of fact, the first message has already been broken, and looks like this :
Deciphered and in plain text :
From Looks: Radio signal 1132/19 contents: Forced to submerge during attack, depth charges. Last enemy location 08:30h, Marqu AJ 9863, 220 degrees, 8 nautical miles, (I am) following (the enemy). (Barometer) falls (by) 14 Millibar, NNO 4, visibility 10.
You no longer need the NSA's assistance here, though they sure contributed a lot while "Eavesdropping on Hell", didn't they?
Distributed computing is a powerful way to solve complex tasks, or at least to put the PC power of the masses to use. It's no longer necessary to hire processing power on demand from any of these jewels; just download a client, start participating, or find a way to motivate your future participants. In my previous post "The current state of IP spoofing" I commented on the ANA Spoofer Project and featured a great deal of other distributed projects. Meanwhile, the Stardust@Home project has also started gaining ground, so be it ETs, space dust, global IP spoofing susceptibility, or unbroken Nazi ciphers: you have the choice of where to participate!
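To get a feel for why distributed computing is needed here, a back-of-the-envelope calculation of the M4 key space is instructive. The sketch below uses the commonly cited component counts (eight rotor choices for three slots, two Greek rotors, two thin reflectors, ten plugboard cables) and deliberately ignores ring settings, so treat it as a rough illustration rather than the M4 Project's actual search space:

```python
from math import factorial, perm

# Ordered choice of 3 rotors (slots I-III) out of the 8 available
rotor_orders = perm(8, 3)                    # 336
greek_rotor = 2                              # Beta or Gamma in the 4th slot
reflectors = 2                               # thin B or thin C
start_positions = 26 ** 4                    # initial positions of all 4 rotors

# Plugboard with 10 cables: 26! / (6! * 10! * 2^10) pairings
plugboard = factorial(26) // (factorial(6) * factorial(10) * 2 ** 10)

keyspace = rotor_orders * greek_rotor * reflectors * start_positions * plugboard
print(f"~{keyspace:.2e} keys")               # on the order of 10^23
```

Even at roughly 10^23 keys a naive brute force is hopeless; the cryptanalytic shortcuts the project actually relies on cut the effective space down dramatically, but the raw number explains the appeal of thousands of donated CPUs.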
Technorati tags :
Security, Cryptography, Enigma, Distributed
Saturday, February 25, 2006
Get it, find out more, and listen to the wisdom from previous episodes.
Friday, February 24, 2006
Recently, an OS X trojan appeared (nice attitude from Apple on embracing the inevitable!), then a second one followed, and besides "worming" a vulnerability and experimenting with propagation methods, I don't really think it's the big trend everyone is waiting for: a standard POC (like Cabir) whose core function would empower a generation of variants for years to come.
I just came across this from Trifinite's blog :
"Trifinite.group member Kevin has published a paper detailing the techniques he used in the development of the InqTana Bluetooth worm that targets vulnerable Mac OS X systems. There has been significant confusion surrounding this worm, so here are some salient points:
- The concurrent release of the OS X Leap.A and InqTana.A worms is coincidental
- There is no conspiracy, AV vendors and Apple were notified about Kevin's progress in developing this worm in advance of making details publicly available
- Both 10.3 and 10.4 systems are vulnerable until patched with APPLE-SA-2005-05-03 and APPLE-SA-2005-06-08
- InqTana prompts before infecting *by design*, Kevin was just trying to be nice, but the worm could easily spread silently
Kevin's paper is available at http://www.digitalmunition.com/InqTanaThroughTheEyes.txt. Comments can be directed to the BlueTraq mailing list. Our sympathies to those organizations who were affected by the false-positive signatures published by overzealous AV companies."
It clarifies a lot, I think; mostly that, while architecture and OS popularity have a lot to do with security and incentives for attacks, "InqTana.A itself has absolutely nothing to do with Leap.A. My work was done completely independent of the author of Leap. The day after I sent out queries to the AV companies about my code I was shocked to see another OSX worm had already been in the news. While my worm sat in the mail spools of several AV companies they were busy writing about the "First Trojan/Worm for OSX"."
Leakage of IP, or am I just being paranoid here? Wired also has some nice comments.
Technorati tags :
Security, Information Security, Apple, Malware, Leap, InqTana, Anti Virus
"Researcher Matthew Aid has discovered a secret reclassification program that has moved thousands of declassified pages out of the National Archives and Records Administration's facility in Maryland. Some groups, such as George Washington University's National Security Archive, are fighting to end the program, arguing that the government has no right to take back information it has published. The reclassification has been ongoing since 1999 as the Central Intelligence Agency, the Defense Intelligence Agency, and the Defense and Justice departments take back information they say had been inadvertently published. The National Security Archive describes some of the documents that have been reclassified as uninteresting and mundane."
And from The National Security Archive :
"Washington, D.C., February 21, 2006 - The CIA and other federal agencies have secretly reclassified over 55,000 pages of records taken from the open shelves at the National Archives and Records Administration (NARA), according to a report published today on the World Wide Web by the National Security Archive at George Washington University."
OSINT has greatly evolved since President Nixon's remark about the CIA, “What use are they? They’ve got over 40,000 people over there reading newspapers.”, whereas secrecy itself undermines a country's national security in very complex ways. I feel that sometimes you need the average citizen's unbiased opinion on a major issue, but I guess I'm not into politics, just figuring out what is going on at the bottom line!
More on Secrecy, Intelligence, Misc :
Making Intelligence Accountable
Why Spy? The Uses and Misuses of Intelligence (1996)
Intelligence Analysis for Internet Security : Ideas, Barriers and Possibilities
U.S. Electronic Espionage : A Memoir
Terrorism prevention in Russia : one year after Beslan
Crypto Law Survey
Project on Government Secrecy
Shhh!!: Keeping Current on Government Secrecy
Technorati tags :
I recently came across a well researched report giving a very in-depth overview and summary of important concepts related to botnets. Recommended bedtime reading; here's an excerpt :
"In this paper we begin the process of codifying the capabilities of malware by dissecting four widely-used Internet Relay Chat (IRC) botnet codebases. Each codebase is classified along seven key dimensions including botnet control mechanisms, host control mechanisms, propagation mechanisms, exploits, delivery mechanisms, obfuscation and deception mechanisms. Our study reveals the complexity of botnet software, and we discuss implications for defense strategies based on our analysis."
Some of the findings worth mentioning, which I also came across in my "Malware - future trends" research, are :
- "The overall architecture and implementation of botnets is complex, and is evolving toward the use of common software engineering techniques such as modularity." Namely, no one is interested in reinventing the wheel, and the Simple Botnet/Malware Communication Protocol I've once mentioned (I originally came across the concept here) could give the malware scene an impressive scale, but could it also put AV vendors and researchers in a favourable position where exploiting protocol weaknesses is more beneficial than current approaches?
- "Shell encoding and packing mechanisms that can enable attacks to circumvent defensive systems are common. However, Agobot is the only botnet codebase that includes support for (limited) polymorphism"
Smart! Mainly because of the fact that "The malware delivery mechanisms used by botnets have implications for network intrusion detection and prevention signatures. In particular, NIDS/NIPS benefit from knowledge of commonly used shell codes and ability to perform simple decoding. If the separation of exploit and delivery becomes more widely adopted in bot code (as we anticipate it will), it suggests that NIDS could benefit greatly by incorporating rules that can detect follow-up connection attempts."
-"All botnets include a variety of sophisticated mechanisms for avoiding detection (e.g., by anti-virus software) once installed on a host system."
Retention, rather than acquisition of new zombies, would tend to dominate from my point of view. Patching the hosts themselves, hiding presence, dealing with the easily detected presence of idle zombies, TCP obfuscations, and tests for debuggers are among the current methods used.
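The paper's point about follow-up connections suggests a simple stateful rule: remember which hosts were just hit by a known exploit signature, and flag any outbound fetch they make shortly afterwards. A minimal sketch (the event interface and the 30-second window are my own illustrative assumptions, not anything from the paper):

```python
class FollowUpDetector:
    """Flag hosts that open a new connection shortly after an exploit
    signature was seen targeting them -- the exploit/delivery split."""

    WINDOW = 30.0  # seconds within which a follow-up fetch looks suspicious

    def __init__(self):
        self.recently_exploited = {}  # host -> timestamp of exploit signature

    def on_exploit_signature(self, victim, ts):
        self.recently_exploited[victim] = ts

    def on_connection(self, src, dst, ts):
        hit = self.recently_exploited.get(src)
        if hit is not None and ts - hit <= self.WINDOW:
            return f"ALERT: {src} fetching payload from {dst} after exploit"
        return None

det = FollowUpDetector()
det.on_exploit_signature("10.0.0.5", ts=100.0)
print(det.on_connection("10.0.0.5", "203.0.113.9", ts=112.0))
```

A production NIDS would of course key on flows rather than bare host strings and expire old state, but the shape of the rule is the same.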
Botnets will continue to dominate due to their concept and potential for growth, and while monitoring and active research are still feasible, encrypted communications, as a logical development, should also be researched as a concept. But how many *public* IRC servers, if such are used, support SSL encryption?
Technorati tags :
Security, Information Security, Malware, Botnets
"House Government Reform Committee Chairman Tom Davis (R-VA) has criticized Google for refusing to hand search records over to the US Justice Department while cooperating with China in censoring certain topics. Justice sought the records to bolster its case against a challenge to online anti-pornography laws, but Google refuses to submit the records on privacy grounds. Davis does not expect a standoff between Google and the government, but hopes an agreement can be reached, allowing Google to supply the records without frightening users that their searches may be examined."
and in case you're interested, some of my comments :
"Is it just me, or must that be a sort of black-humour political blackmail given the situation?! First, and most of all, the idea of using search engines to bolster the online anti-pornography laws created enough debate for years of commentaries and news stories, and was wrong from the very beginning. Even if Google provides the data requested, it doesn't necessarily solve the problem, so instead of blowing the whistle without any point, sample the top 100 portals and see how they enforce these policies, if they do. As far as China is concerned, or is actually used as a point of discussion, remember the difference between modern communism and democracy as concepts; the first is an excuse for the second. Still, I feel it's one thing to censor, another to report actual activity to law enforcement. I feel alternative methods should be used, and porn "to go" is a more realistic threat to minors than the Net is, to a certain extent, yet the Net remains the king of content as always."
Google indeed issued a statement, sort of excusing the censorship on the grounds that "the time has come to open ourselves to the Chinese market", and while their intentions make business sense, the outcry had very positive consequences from my point of view: it built more awareness and put the world's eyes on the Chinese enforcement of censorship practices. But is it just China to blame, given that "Western" countries censor as well, or is it China's huge ambition of maintaining a modern communism in the 21st century that seems to be the root of the problem?
In an article I read some time ago, "A day in the life of a Chinese Internet Police Officer", you can clearly see the motivation, but also come across the facts themselves: you cannot easily censor such a huge Internet population; instead, guidance rather than blocking, and self-regulation (that is, limiting yourself out of fear of prosecution) seem to be the current practice, besides jailing journalists! And while sometimes you really need to come up with a creative topic worth writing about, free speech is among the most important human rights at the bottom line.
Chris Smith, Chairman of the House subcommittee that oversees Global Human Rights, proposed a discussion draft, "The Global Online Freedom Act of 2006", "to promote freedom of expression on the internet [and] to protect United States businesses from coercion to participate in repression by authoritarian foreign governments". It is so "surprising" to find out that they are so interested in locating cyber-dissidents: "U.S. search engine providers must transparently share with the U.S. Office of Global Internet freedom details of terms or parameters submitted by Internet-restricting countries." Exactly as I mentioned in my previous "Anonymity or Privacy on the Internet?" post.
Meanwhile, the OpenNetInitiative also released a bulletin analyzing Chinese non-commercial website registration regulation, giving even further details on the recent "you're being watched" culture that tries to cost-effectively deal with the issue of self-regulation :
"In a report published last year, “Internet Filtering in China: 2004-2005,” ONI shared its research findings that China’s filtering regime is the most extensive, technologically sophisticated, and broad-reaching Internet filtering system in the world. This new regulation does not rely on sophisticated filtering technology, but uses the threat of surveillance and legal sanction to pressure bloggers and website owners into self-censorship. While savvy website owners might thwart the registration requirement with relative ease, the regulation puts the vast majority of Chinese Internet users on notice that their online behaviour is being monitored and adds another layer of control to China’s already expansive and successful Internet filtering regime."
Yet another piece of recent research I came across is a university study finding that "60% Oppose Search Engines Storing Search Behaviours"; you can also consider the "alternatives" if you're interested :) A lot is yet to happen for sure, but it is my opinion that personalized search is the worst privacy time bomb a leading search engine should not be responsible for, besides open-ended data retention policies and not communicating an event such as the DoJ's request, but complying with it right away. Bad Yahoo!, bad MSN!
At the bottom line, Google's notifications of censored content (as of March 2005 only, excluding the period before!), the general public's common sense in easily evaluating what's blocked and what isn't, and the powerful digital rights organizations that simultaneously increased their efforts to gain the maximum out of the momentum seem to have done a great job of building awareness of the problem. Still, having to live with the booming wannabe "free market" Chinese economy, and the country's steadily climbing position as a major economic partner, economic sanctions, quotas, or similar real-life scenarios will remain science fiction.
Technorati tags :
Privacy, Anonymity, Censorship, China, Search Engine
Friday, February 17, 2006
Consider going through some of my previous thoughts on the emerging market for software/0day vulnerabilities as well, and stay tuned for another recent discovery a dude tipped me off on; thanks, as a matter of fact!
Initiatives such as The Lone Gunmen, The X-Files, and The Outer Limits have already proven useful, given that someone listens! For instance :
"In a foreshadowing of the September 11, 2001 attacks, subsequent conspiracy theories, and the 2003 invasion of Iraq, the plot of the March 4, 2001 pilot episode of the series depicts a secret U.S. government agency plotting to crash a Boeing 727 into the World Trade Center via remote control for the purpose of increasing the military defence budget and blaming the attack on foreign "tin-pot dictators" who are "begging to be smart-bombed." This episode aired in Australia less than two weeks before the 9/11 attacks, on August 30."
Conspiracy theorists do have a lot to say, so don't ignore them, find the balance, and enjoy the series :)
You can also browse through some transcripts as well.
Technorati tags :
"US investment bank Morgan Stanley will offer a settlement to the Securities and Exchange Commission (SEC), agreeing in principle to pay a $15 million fine for failing to preserve e-mail messages. The e-mail messages could have provided useful evidence in several cases brought against the company. In one case, resulting in a $1.58 billion judgement against the bank, a judge turned the burden of proof on Morgan Stanley after learning they had deleted e-mails related to the case. However, Morgan Stanley has not yet presented the offer to the SEC nor is there a guarantee the SEC will accept. The investment bank says it is fixing the problems that led to the erasure and is pleading for leniency."
He, He, He!
You see, the email archiving market is about to top $310M for 2005 according to IDC, yet one of the world's most powerful investment banks seemingly cannot comply with the requirements. Lack of financial power? Nope. Lack of incentives? Yep! The case reminds me of KPMG's tax shelters, McAfee's fine for an accounting scam between 1998-2000, and the "Smoking Emails" admissible in the $1 billion Enron-related Chase case.
Quit smoking emails, and take advantage of MailArchiva - Open Source Email Archiving and Compliance.
Technorati tags :
smoking gun, investment banking, compliance, mailarchiva
Thursday, February 16, 2006
Bill Gates commented on the issue, and note where: at the RSA Conference, run by perhaps the company most actively building awareness of the potential of, and need for, two-factor authentication, or anything but static passwords for various access control purposes. Moreover, it was again Bill Gates who wanted to integrate the Belgian eID card with MSN Messenger (Anonymity or Privacy on the Internet?). Microsoft is always reinventing the wheel, be it with antivirus or its Passport service, and while the company has financial obligations to its stakeholders, I feel it's the wrong approach on the majority of occasions.
What I wonder is: are they forgetting that over 95% of the PCs out there run Microsoft Windows, and not Vista, and how many will continue to do so, polluting the Internet at the bottom line? My point is that MS's constant rush towards "the next big thing" doesn't actually leave them the resources to tackle some of the current problems, at least in a timely manner. What do you think? What could Microsoft do to actually influence the acceptance of two-factor authentication, and moreover, how feasible is the concept at the bottom line?
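For a sense of what "anything but static passwords" looks like in practice, here is a minimal sketch of the HOTP one-time-password algorithm from RFC 4226 (published December 2005): the token and the server share a secret and a counter, so a sniffed or phished code is worthless on the next login:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# First two RFC 4226 test vectors for the ASCII secret "12345678901234567890"
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```

Hardware tokens like RSA's SecurID work on the same principle, just with time or a proprietary scheme in place of the counter.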
Technorati tags :
security, microsoft, authentication, passwords
Wednesday, February 15, 2006
"This document outlines suggested steps for determining whether your Windows system has been compromised. System administrators can use this information to look for several types of break-ins. We also encourage you to review all sections of this document and modify your systems to address potential weaknesses."
I find it a well summarized checklist; perhaps the first thing I looked up when going through it was the rootkits section, given the topic. It does provide links to free tools, but I feel they could have extended the topic a little bit. Overall, consider going through it. Another checklist I recently came across is "11 things to do after a hack", and another quick summary is "10 threats you probably didn't make plans for".
Rootkits are gaining popularity, and with a reason: it takes more effort to infect new victims than to keep the current ones, at least the way I see it. In one of my previous posts, "Personal Data Security Breaches - 2000/2005", I mentioned a rootkit placed on a server at the University of Connecticut on October 26, 2003 that wasn't detected until July 20, 2005. Enough for auditing, detecting attackers, and forensics? Well, not exactly; still, something else worth mentioning is the interaction between auditing, rootkits, and forensics. There's also been another reported case of using rootkit technologies for DRM (Digital Rights Management) purposes, not on CDs but on DVDs this time. So it's not enough that malware authors are utilizing the rootkit concept; flawed approaches from the very companies we purchase our CDs and DVDs from are resulting in even more threats to deal with!
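Since the checklist's rootkit section came up, the core trick behind many rootkit detectors, cross-view diffing, is easy to illustrate: enumerate the same objects through two different interfaces and look for discrepancies that a hook in one path would create. The toy sketch below diffs two directory listings; both views go through the high-level API here, so it is only the shape of the technique. A real detector would pair the API view with raw, on-disk filesystem parsing:

```python
import os

def api_view(path):
    """Directory contents as the high-level API reports them."""
    return set(os.listdir(path))

def second_view(path):
    """A second enumeration path; a real tool would parse the disk directly."""
    return {entry.name for entry in os.scandir(path)}

def hidden_entries(path):
    """Names visible in one view but not the other: rootkit suspects."""
    return api_view(path) ^ second_view(path)

print(hidden_entries("."))  # empty set on an uncompromised system
```

Tools like RootkitRevealer apply exactly this idea, comparing the Windows API's view of files and registry keys against what is actually on disk.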
Check CERT's "Windows Intruder Detection Checklist" and, if interested, also go through the following resources on rootkits and digital forensics :
Windows rootkits of 2005, part one
Windows rootkits of 2005, part two
Windows rootkits of 2005, part three
Malware Profiling and Rootkit Detection on Windows
Shadow Walker - Raising The Bar For Windows Rootkit Detection - slides
When Malware Meets Rootkits
Leave no trace - book excerpt
Rootkits and how to combat them
Rootkits Analysis and Detection
Concepts for the Stealth Windows Rootkit
Avoiding Windows Rootkit Detection
Checking Microsoft Windows Systems for Signs of Compromise
Implementing and Detecting an ACPI BIOS Rootkit
Host-based Intrusion Detection Systems
Forensics Tools and Processes for Windows XP Clients
F.I.R.E - Forensic and Incident Response Environment Bootable CD
Forensic Acquisition Utilities
FCCU GNU/Linux Forensic Bootable CD 10.0
iPod Forensics :)
Forensics of a Windows system
First Responders Guide to Computer Forensics
Computer Forensics for Lawyers
security, information security, forensics, rootkit, security breach, CERT
Tuesday, February 14, 2006
"A group of graduates from the Massachusetts Institute of Technology (MIT) aim to change that by crawling the Web with hundreds, and soon thousands, of virtual computers that detect which Web sites attempt to download software to a visitor's computer and whether giving out an e-mail address during registration can lead to an avalanche of spam.
The goal is to create a service that lets the average Internet user know what a Web site actually does with any information collected or what a download will do to a computer, Tom Pinckney, vice president of engineering and co-founder of the start-up SiteAdvisor, said during a presentation at the CodeCon conference here."
The concept is simply amazing, and while it's been around for ages, it still needs more acceptance from decision makers who tend to stereotype on perimeter and antivirus defense only. Let's start from the basics: it is my opinion that users do more surfing than downloading, that is, the Web and its insecurities represent a greater threat than users receiving malware in their mailboxes or IMs. Not that they don't receive any, but I see a major shift towards URL droppers, and while defacement groups are more than willing to share these with phishers etc., a URL dropper is easily replaced by an IP-based one, so you end up with infected PCs infecting others by hosting and distributing the malware. Sneaky, isn't it? My point is that initiatives such as crawling the web for malicious sites, and listing, categorizing, and updating their status, represent a great opportunity, both security- and business-wise. Just as you know the bad neighbourhoods around your town, you need a visualization to assist in research or act as a security measure, and while it's hard to map the Web and keep it up to date, I find the idea great!
So what is SiteAdvisor up to? Another built-to-flip startup? I doubt it, as I can almost smell the quality entrepreneurship of MIT's graduates, of course, given they assign a CEO with a business background :) APIs, plugins, the majority of popular sites already tested according to them, and it's free, at least to the average Internet user, whose virtual "word of mouth" will help this project get the scale and popularity necessary to see it licensed and included within current security solutions. They simply cannot test the entire Web, and I feel they shouldn't even set that as an objective; instead, map the most trafficked web sites or do so on-the-fly with the top 20 results from Google. I wonder how downloads are tested (are they run through VirusTotal, for instance?), and how significant could a "push" approach from end users, submitting direct links to malicious files found within a domain for automatic analysis, sound in here?
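I don't know how SiteAdvisor's crawler is actually built, but the first stage of any such system, spotting which links on a page even point at something worth sandboxing, is simple enough to sketch. The extension list and the sample page below are made up for illustration:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Extensions a crawler might queue for download-and-analyze (illustrative list)
RISKY_EXTENSIONS = {".exe", ".scr", ".pif", ".msi", ".cab"}

class DownloadLinkAuditor(HTMLParser):
    """Collect links whose targets look like Windows executables."""

    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        path = urlparse(href).path.lower()
        if any(path.endswith(ext) for ext in RISKY_EXTENSIONS):
            self.flagged.append(href)

page = ('<a href="/downloads/toolbar_setup.exe">Free toolbar!</a>'
        '<a href="/about.html">About us</a>')
auditor = DownloadLinkAuditor()
auditor.feed(page)
print(auditor.flagged)  # ['/downloads/toolbar_setup.exe']
```

Everything the auditor flags would then be fetched and run in an instrumented virtual machine, which is where the real (and expensive) work happens.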
I think the usefulness of their idea could only be achieved through cooperation with, or acquisition by, a leading search engine. My point is that some of the project's downsides are the lack of on-the-fly ability (that would be like v2.0 and a major breakthrough in respect to performance), the lack of resources to catch up with Google on the known web (25,270,000,000 pages according to them recently), and how IP droppers instead of URL-based ones totally ruin the idea in real-life situations (it takes more effort to register and maintain a domain than to use a zombie host's capabilities to do the same, doesn't it?).
In one of my previous posts on why you should aim higher than antivirus signature protection only, I mentioned some of my ideas: "Is client side sandboxing an alternative as well, could and would a customer agree to act as a sandbox compared to the current (if any!) contribution of forwarding a suspicious sample? Would v2.0 consist of a collective automated web patrol in a PC's "spare time"?"
Crawling for malicious content and making sense of the approaches used, in order to provide effective solutions, is a very exciting topic. As a matter of fact, in one of my previous posts, "What search engines know, or may find about us?", I mentioned the existence of a project to mine the Web for terrorist sites dating back to 2001, and I'm curious about its progress in respect to the current threat of cyberterrorism. I feel that crawling for malicious content and crawling for terrorist propaganda have a lot in common: find the bad neighbourhoods and have your spiders do whatever you instruct them to do, though I still feel quality and in-depth overview would inevitably be sacrificed for automation.
What do you think is the potential of web crawling for malicious content? And by malicious I also include content harmful in respect to the cyberterrorism PSYOPS techniques (I once came across a comic PSYOPS worth reading!) that I come across on a daily basis. Feel free to test any site you want, or browse through their catalogue as well.
You can also find more info on the topic, and alternative crawling solutions, projects and Cyberterrorism activities online here :
A Crawler-based Study of Spyware on the Web
Covert Crawling: A Wolf Among Lambs
IP cloaking and competitive intelligence/disinformation
Automated Web Patrol with HoneyMonkeys: Finding Web Sites That Exploit Browser Vulnerabilities
The Strider HoneyMonkey Project
STRIDER : A Black-box, State-based Approach to Change and Configuration Management and Support
Webroot's Phileas Malware Crawler
Methoden und Verfahren zur Optimierung der Analyse von Netzstrukturen am Beispiel des AGN-Malware Crawlers (in German)
Jihad Online : Islamic Terrorists and the Internet
Right-wing Extremism on the Internet
Terrorist web sites courtesy of the SITE Institute
The HATE Directory November 2005 update (very rich content!)
Recruitment by Extremist Groups on the Internet
security, information security, SiteAdvisor, web crawler, search engine, cyberterrorism
Monday, February 13, 2006
A lot of buzz on the CME-24 front, and I feel quite a lot of time was spent speculating on the infected population based on a web counter whose results weren't as accurate as originally thought. And as vendors closely cooperated to build awareness of the destructive payload, I think that's the first victory for 2006: no window of opportunity. The best part is that CAIDA patiently waited until the buzz was over to actually come up with reliable statistics on Nyxem.
It's rather quiet on the AV radars from the way I see it, and quickly going through F-Secure's, Kaspersky's (who seem busy analyzing code; great real-time stats!), and Symantec's, I came across the similarities you can feel for yourself in "the wild" :) Symantec's ThreatCon is normal; what's interesting to note is VirusTotal's flood of detected WMFs, which is perhaps a consequence of the *known* second vulnerability. James Ancheta's case was perhaps the first known, and so nicely documented, case of botnet power on demand. Recently, a botnet, or participation in one, shut down a hospital's network; moreover, I think StormPay didn't comply with a DDoS extortion attempt during the weekend? Joanna Rutkowska provided more insights on stealth malware in her research (slides, demo) "about new generation of stealth malware, so called Stealth by Design (SbD) malware, which doesn't use any of the classic rootkit technology tricks, but still offers full stealth. The presentation also focuses on limitations of the current anti-rootkit technology and why it’s not useful in fighting SbD malware. Consequently, alternative method for compromise detection is advocated in this presentation, Explicit Compromise Detection (ECD), as well as the challenges which Independent Software Vendors encounter when trying to implement ECD for Windows systems – I call it Memory Reading Problem (MRP)." How sound is the possibility of malware heading towards the BIOS anyway? An "Intelligent P2P worm's activity" that I just came across also deserves to be mentioned; the concept is great, but the authors still have to figure out how to come up with legitimate file sizes for multimedia files if they really want to fake their existence. What do you think about this?
Some recent research and articles worth mentioning: Kaspersky's Malware Evolution: October - December 2005 outlines the possibilities for cryptoviral extortion attacks and 0day vulnerabilities, and how the WMF bug got purchased/sold for $4000. There have also been quite a lot of new trojans analyzed by third-party researchers, and among the many recent articles that made an impression on me are "Malicious Malware: attacking the attackers, part 1" and part 2. From the article :
"This article explores measures to attack those malicious attackers who seek to harm our legitimate systems. The proactive use of exploits and bot networks that fight other bot networks, along with social engineering and attacker techniques are all discussed in an ethical manner."
Internet worms and IPv6 makes nice points, though I wish there were only network-based worms to bother about. Besides, have I missed important concepts in the various commentaries? Did you? Malware is still split between vulnerability exploitation and social engineering attacks, at least for the last several months, but increased corporate and home IM usage will inevitably lead to many more security threats to worry about. Web platform worms, such as the MySpace one and Google's AdSense trojan, are slowly gaining ground as a Web 2.0 concept, so should you look for virus or IDS signatures? Try both!
During January, David Aitel reopened the subject of beneficial worms, following Vesselin Bontchev's research on "good worms". While I have my reservations on such a concept, which would mostly have to do with patching the way I see it, could exploiting a vulnerability in a piece of malware be considered useful some day, or could a network mapping worm launched in the wild act as an early response system on mapped targets that could end up on a malware's "hitlist"? I also think the alternative to such an approach, going beyond the network level, is Johnny Long's (see my recent chat with him) Google Hacking Database: you won't need to try to map the unlimited IPv6 address space looking for prey. Someone will either do the job for you, or with time, the transparency in IPv6 necessary for segmented and targeted attacks will be achieved as well.
Several days ago, Kaspersky released their summary for 2005. Nothing groundbreaking here compared to previous research, such as how the WMF vulnerability was purchased/sold for $4000 :) but still, it's a very comprehensive and in-depth summary of 2005 in respect to the malware variables they keep track of. I recommend you go through it. What made an impression on me?
- +272% Trojan-Downloaders 2005 vs 2004
- +212% Trojan-Dropper 2005 vs 2004
- +413% Rootkit 2005 vs 2004
- During 2005, on average 28 new rootkits a month
- IM worms: 32 new modifications per month
- IRC worms are down 31%
- P2P worms are down 43%. The best thing is that Kaspersky Lab also shares my opinion on the reason for the decline: P2P busts and general prosecutions for file-sharing. What's also interesting to mention is the recent ruling by a district court in Paris on the "legality of P2P" in France and the charge of 5 EUR per month for access to P2P, but for how long? :) P2P file-sharing isn't illegal, and if you cannot come up with a way to release your multimedia content online, don't bother doing it at all. In previous chats I've had with Eric Goldman, he also made some very good points on the topic.
- +68% Exploit, that is, software vulnerabilities and the use of exploits, both known ones and 0days, with the idea of easily compromising the targeted PC, though I'm expecting the actual percentage to be much higher
- Internet banking malware reached a record 402% growth rate by the end of 2005. Trojan.Passwd is a very good example; it clearly indicates that it was written for financial gain. E-banking can indeed prove dangerous sometimes, and while I'm not being paranoid here, I'd recommend you go through Candid's well-written "Threats to Consider when doing E-banking" paper
- A modest growth from 22 programs per month in 2004 to 31 in 2005 on the Linux malware front
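For clarity, the percentages above are year-over-year growth figures; a quick sketch of the arithmetic, using illustrative counts rather than Kaspersky's raw data:

```python
def growth_pct(count_2004, count_2005):
    """Year-over-year change, expressed as a percentage of the 2004 count."""
    return (count_2005 - count_2004) / count_2004 * 100

# Illustrative numbers only -- chosen to reproduce two of the reported figures.
print(round(growth_pct(100, 513)))  # the reported +413% rootkit growth
print(round(growth_pct(100, 57)))   # the reported -43% P2P worm decline
```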
I feel today's malware scene is so vibrant that it's getting more and more complex to keep track of possible propagation vectors, the ecosystem here and there, and most of all, communicating what's going on to the general public (actually, this one isn't complex). A few things worth pointing out:
- the commercialization of the market for software vulnerabilities, where we saw the first underground purchase of the WMF exploit; so, have software vulnerabilities always been the currency of trade in the security world, or have they only started getting the necessary attention recently?
- is stealth malware a bigger issue than the use of 0day vulnerabilities, and is retaining current zombie PCs a bigger priority than infecting new ones?
- business competitors, enemies, and unethical individuals are actively seeking undetected pieces of malware coded especially for their needs; these definitely go beneath the sensors
- Ancheta's case is, from my point of view, a clear indication of a working ecosystem, one that goes as far as providing after-sale services such as DDoS strength consultations and 0day malware on demand
To sum up, malware tends to look so sneaky when spreading and zoomed out :) I originally came across the VisualComplexity project in one of my previous posts on visualization. Feel I've missed something worth mentioning during the last two months? Then consider expanding the discussion!
Fileprint analysis for Malware Detection
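Fileprint analysis models a file (or file type) by its byte-value frequency distribution and flags content that deviates from a trained profile; packed or encrypted payloads tend to look far more uniform than, say, plain text. A minimal sketch, where the Manhattan distance metric and the 0.5 cutoff are my illustrative assumptions rather than the paper's exact parameters:

```python
from collections import Counter

def fileprint(data: bytes):
    """Normalized frequency of each of the 256 byte values (a 1-gram model)."""
    counts = Counter(data)
    total = len(data)
    return [counts.get(b, 0) / total for b in range(256)]

def manhattan(p, q):
    """Distance between two fileprints; higher means less alike."""
    return sum(abs(a - b) for a, b in zip(p, q))

# Toy example: plain ASCII text versus high-entropy, packed-looking bytes.
text_profile = fileprint(b"this is ordinary english text, repeated " * 50)
suspect = fileprint(bytes(range(256)) * 8)

THRESHOLD = 0.5  # assumed cutoff; in practice tuned on real training data
print("deviant" if manhattan(text_profile, suspect) > THRESHOLD else "normal")
```

The uniform byte distribution of the second sample puts it far from the text profile, which is the whole idea behind spotting embedded or appended malicious payloads this way.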
Thursday, February 09, 2006
- that the other sides are actively developing such capabilities, and they are, because each assumes the same about the others => an arms race
- growing trend towards asymmetric warfare
- cost-effectiveness compared to building a multimillion-dollar nuclear submarine as a statement of power?
In my research on the Future Trends of Malware, I pointed out some of the trends related to botnets and DDoS attacks, namely DDoS extortion and DDoS on demand/for hire, and with the first legally prosecuted case of offering botnet access on demand, it's a clear indication of where things are going. Defense against frontal attacks isn't cost-effective given that, at the bottom line, the costs to maintain the site outpace the revenues generated for the time; hard dollars disappear, while soft ones, such as reputation, remain the same.
My advice is to consider outsourcing the problem and to stay away from product line extensions; I think it's that simple. A differentiated service for fighting infected nodes is being offered by Sophos, namely Zombie Alert, which makes me wonder why the majority of AV vendors besides them haven't come up with an alternative, given the data their sensor networks are able to collect. Moreover, should such a service be free, would it end up as a licensed extension included within the majority of security solutions, and can a motivated system administrator successfully detect, block, and isolate zombie traffic going out of the network (I think yes!)? As far as botnets are concerned, there were even speculations on using "Skype to control botnets"; now who would want to do that, and for what reason, given the current approaches for controlling botnets? Isn't the use of cryptography or security through obscurity ("talkative bots", stripping IRCds) the logical "evolution" here?
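On that last question, a motivated administrator can start with something as simple as flagging internal hosts that open outbound connections to well-known IRC ports, a crude but cheap first indicator of zombie traffic. A minimal sketch; the log format (src_ip dst_ip dst_port) and the port list are simplified assumptions for illustration:

```python
# Common IRC ports historically used for botnet command and control.
IRC_PORTS = {6666, 6667, 6668, 6669, 7000}

def flag_zombie_candidates(log_lines):
    """Return internal source IPs seen connecting outbound to IRC ports."""
    suspects = set()
    for line in log_lines:
        src, dst, port = line.split()
        if int(port) in IRC_PORTS:
            suspects.add(src)
    return suspects

logs = [
    "10.0.0.5 203.0.113.9 80",
    "10.0.0.7 198.51.100.3 6667",  # outbound IRC -> worth a closer look
    "10.0.0.7 198.51.100.3 6667",
    "10.0.0.9 192.0.2.10 443",
]
print(sorted(flag_zombie_candidates(logs)))  # ['10.0.0.7']
```

Of course, the moment bots move their command and control off standard IRC ports, or onto encrypted channels, port-based heuristics stop being enough, which ties back to the "evolution" point above.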
Something else worth mentioning is how DoS attacks got almost totally replaced by DDoS ones. My point is that the former can be much sneakier and easily go beneath the radar compared to a large-scale DDoS attack. A single packet can be worth more than an entire botnet's population, can't it?
How do you think DDoS attacks should be prevented: active defense such as the solutions mentioned, or proactive measures? What do you think?