Tag Archives: scanning

[ISN] Secret DHS Audit Could Prove Governmentwide Hacker Surveillance Isn’t Really Governmentwide

www.nextgov.com/cybersecurity/2015/11/secret-dhs-audit-could-prove-governmentwide-network-surveillance-isnt-really-governmentwide/124018/ By Aliya Sternstein Nextgov.com November 25, 2015 A secret federal audit substantiates a Senate committee’s concerns about underuse of a governmentwide cyberthreat surveillance tool, the panel’s chairman says. The intrusion-prevention system, named EINSTEIN 3 Accelerated, garnered both ridicule and praise following a hack of 21.5 million records on national security employees and their relatives. The scanning tool failed to block the attack, on an Office of Personnel Management network, because it can only detect malicious activity that people have seen before. At OPM, the attackers, believed to be well-resourced Chinese cyber sleuths, used malware that security researchers and U.S. spies had never witnessed. Still, EINSTEIN came in handy, according to U.S. officials, after the OPM malware was identified through other monitoring tools. The Department of Homeland Security loaded EINSTEIN with the “indicators” of the attack pattern so it could scan for matching footprints on other government networks. […]
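The approach described above, loading a tool with “indicators” of a known attack pattern and scanning other traffic for matching footprints, is classic signature matching. A minimal sketch in Python of the general technique, using entirely hypothetical indicators and log records (this illustrates why such a tool can only catch what has been seen before; it is not EINSTEIN’s actual implementation):

```python
# Illustrative indicator-of-compromise (IOC) matching. All indicator
# values and log lines below are hypothetical.
KNOWN_IOCS = {
    "bad-c2.example.net",                  # hypothetical C2 domain
    "d41d8cd98f00b204e9800998ecf8427e",    # hypothetical malware hash
}

def match_indicators(records, iocs=KNOWN_IOCS):
    """Return the records containing any known indicator.

    Anything not already in the IOC list passes silently -- the
    limitation the article describes: never-before-seen malware
    produces no match.
    """
    return [r for r in records if any(ioc in r for ioc in iocs)]

logs = [
    "GET http://bad-c2.example.net/beacon",
    "GET http://intranet.example.gov/home",
]
hits = match_indicators(logs)
print(hits)  # only the record containing a known indicator
```

Once OPM’s attack indicators were identified by other means, a matcher like this could flag the same footprints elsewhere, which is how EINSTEIN “came in handy” after the fact.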






[ISN] Security Tool Tricks Workers Into Spilling Company Secrets

http://www.wired.com/2015/08/ava-human-vulnerability-scanner-finds-your-weakest-security-link/ By Klint Finley Business Wired.com 08.11.15 TRICKING PEOPLE INTO bypassing security measures, revealing passwords, and disclosing confidential information is called “social engineering” in the computer security business. It’s a huge problem, and it’s one Laura Bell, founder of the New Zealand security consultancy SafeStack, was contemplating while home on maternity leave two years ago. Although many companies have mandatory security training, she realized there’s no real way of knowing whether such training is effective until it’s too late. What her clients really needed, she decided, was a way of identifying the employees most vulnerable to social engineering attacks. There wasn’t anything like that available at the time, so working in half-hour increments as her daughter slept, she created AVA, a free open-source tool for what Bell calls human vulnerability scanning. But not everyone is happy with the results. “Some people have said I should go to prison for releasing this,” Bell says. First, a hypothetical example of social engineering at work. Imagine you’re a junior help desk technician at a large company. You’re low on the corporate ladder, and constantly worried about keeping your job. One night you get a text from a number you don’t recognize. “It’s Ted,” the message reads. “I need my password reset immediately. Lots of money riding on this deal.” […]



[ISN] No, You Really Can’t

https://blogs.oracle.com/maryanndavidson/entry/no_you_really_can_t Mary Ann Davidson Blog By User701213-Oracle Aug 10, 2015 I have been doing a lot of writing recently. Some of my writing has been with my sister, with whom I write murder mysteries using the nom-de-plume Maddi Davidson. Recently, we’ve been working on short stories, developing a lot of fun new ideas for dispatching people (literarily speaking, though I think about practical applications occasionally when someone tailgates me). Writing mysteries is a lot more fun than the other type of writing I’ve been doing. Recently, I have seen a large-ish uptick in customers reverse engineering our code to attempt to find security vulnerabilities in it. This is why I’ve been writing a lot of letters to customers that start with “hi, howzit, aloha” but end with “please comply with your license agreement and stop reverse engineering our code, already.” I can understand that in a world where it seems almost every day someone else had a data breach and lost umpteen gazillion records to unnamed intruders who may have been working at the behest of a hostile nation-state, people want to go the extra mile to secure their systems. That said, you would think that before gearing up to run that extra mile, customers would already have ensured they’ve identified their critical systems, encrypted sensitive data, applied all relevant patches, be on a supported product release, use tools to ensure configurations are locked down – in short, the usual security hygiene – before they attempt to find zero day vulnerabilities in the products they are using. And in fact, there are a lot of data breaches that would be prevented by doing all that stuff, as unsexy as it is, instead of hyperventilating that the Big Bad Advanced Persistent Threat using a zero-day is out to get me! Whether you are running your own IT show or a cloud provider is running it for you, there are a host of good security practices that are well worth doing. 
Even if you want to have reasonable certainty that suppliers take reasonable care in how they build their products – and there is so much more to assurance than running a scanning tool – there are a lot of things a customer can do like, gosh, actually talking to suppliers about their assurance programs or checking certifications for products for which there are Good Housekeeping seals (or “good code” seals) like Common Criteria certifications or FIPS 140 certifications. Most vendors – at least, most of the large-ish ones I know – have fairly robust assurance programs now (we know this because we all compare notes at conferences). That’s all well and good, is appropriate customer due diligence and stops well short of “hey, I think I will do the vendor’s job for him/her/it and look for problems in source code myself,” even though:

- A customer can’t analyze the code to see whether there is a control that prevents the attack the scanning tool is screaming about (which is most likely a false positive)
- A customer can’t produce a patch for the problem – only the vendor can do that
- A customer is almost certainly violating the license agreement by using a tool that does static analysis (which operates against source code)

I should state at the outset that in some cases I think the customers doing reverse engineering are not always aware of what is happening because the actual work is being done by a consultant, who runs a tool that reverse engineers the code, gets a big fat printout, drops it on the customer, who then sends it to us. Now, I should note that we don’t just accept scan reports as “proof that there is a there, there,” in part because whether you are talking static or dynamic analysis, a scan report is not proof of an actual vulnerability. Often, they are not much more than a pile of steaming … FUD. (That is what I planned on saying all along: FUD.)
This is why we require customers to log a service request for each alleged issue (not just hand us a report) and provide a proof of concept (which some tools can generate). If we determine as part of our analysis that scan results could only have come from reverse engineering (in at least one case, because the report said, cleverly enough, “static analysis of Oracle XXXXXX”), we send a letter to the sinning customer, and a different letter to the sinning consultant-acting-on-customer’s behalf – reminding them of the terms of the Oracle license agreement that preclude reverse engineering, So Please Stop It Already. (In legalese, of course. The Oracle license agreement has a provision such as: “Customer may not reverse engineer, disassemble, decompile, or otherwise attempt to derive the source code of the Programs…” which we quote in our missive to the customer.) Oh, and we require customers/consultants to destroy the results of such reverse engineering and confirm they have done so. Why am I bringing this up? The main reason is that, when I see a spike in X, I try to get ahead of it. I don’t want more rounds of “you broke the license agreement,” “no, we didn’t,” yes, you did,” “no, we didn’t.” I’d rather spend my time, and my team’s time, working on helping development improve our code than argue with people about where the license agreement lines are. Now is a good time to reiterate that I’m not beating people up over this merely because of the license agreement. 
More like, “I do not need you to analyze the code since we already do that, it’s our job to do that, we are pretty good at it, we can – unlike a third party or a tool – actually analyze the code to determine what’s happening and at any rate most of these tools have a close to 100% false positive rate so please do not waste our time on reporting little green men in our code.” I am not running away from our responsibilities to customers, merely trying to avoid a painful, annoying, and mutually time-wasting exercise. For this reason, I want to explain what Oracle’s purpose is in enforcing our license agreement (as it pertains to reverse engineering) and, in a reasonably precise yet hand-wavy way, explain “where the line is you can’t cross or you will get a strongly-worded letter from us.” Caveat: I am not a lawyer, even if I can use words like stare decisis in random conversations. (Except with my dog, because he only understands Hawaiian, not Latin.) Ergo, when in doubt, refer to your Oracle license agreement, which trumps anything I say herein! With that in mind, a few FAQ-ish explanations:

Q. What is reverse engineering?
A. Generally, our code is shipped in compiled (executable) form (yes, I know that some code is interpreted). Customers get code that runs, not the code “as written.” That is for multiple reasons such as users generally only need to run code, not understand how it all gets put together, and the fact that our source code is highly valuable intellectual property (which is why we have a lot of restrictions on who accesses it and protections around it). The Oracle license agreement limits what you can do with the as-shipped code and that limitation includes the fact that you aren’t allowed to de-compile, dis-assemble, de-obfuscate or otherwise try to get source code back from executable code. There are a few caveats around that prohibition but there isn’t an “out” for “unless you are looking for security vulnerabilities in which case, no problem-o, mon!” If you are trying to get the code in a different form from the way we shipped it to you – as in, the way we wrote it before we did something to it to get it in the form you are executing – you are probably reverse engineering. Don’t. Just – don’t.

Q. What is Oracle’s policy regarding the submission of security vulnerabilities (found by tools or not)?
A. We require customers to open a service request (one per vulnerability) and provide a test case to verify that the alleged vulnerability is exploitable. The purpose of this policy is to try to weed out the very large number of inaccurate findings by security tools (false positives).

Q. Why are you going after consultants the customer hired? The consultant didn’t sign the license agreement!
A. The customer signed the Oracle license agreement, and the consultant hired by the customer is thus bound by the customer’s signed license agreement. Otherwise everyone would hire a consultant to say (legal terms follow) “Nanny, nanny boo boo, big bad consultant can do X even if the customer can’t!”

Q. What does Oracle do if there is an actual security vulnerability?
A. I almost hate to answer this question because I want to reiterate that customers Should Not and Must Not reverse engineer our code. However, if there is an actual security vulnerability, we will fix it. We may not like how it was found but we aren’t going to ignore a real problem – that would be a disservice to our customers. We will, however, fix it to protect all our customers, meaning everybody will get the fix at the same time. However, we will not give a customer reporting such an issue (that they found through reverse engineering) a special (one-off) patch for the problem. We will also not provide credit in any advisories we might issue. You can’t really expect us to say “thank you for breaking the license agreement.”

Q. But the tools that decompile products are getting better and easier to use, so reverse engineering will be OK in the future, right?
A. Ah, no. The point of our prohibition against reverse engineering is intellectual property protection, not “how can we cleverly prevent customers from finding security vulnerabilities – bwahahahaha – so we never have to fix them – bwahahahaha.” Customers are welcome to use tools that operate on executable code but that do not reverse engineer code. To that point, customers using a third party tool or service offering would be well-served by asking questions of the tool (or tool service) provider as to a) how their tool works and b) whether they perform reverse engineering to “do what they do.” An ounce of discussion is worth a pound of “no we didn’t,” “yes you did,” “didn’t,” “did” arguments. *

Q. “But I hired a really cool code consultant/third party code scanner/whatever. Why won’t mean old Oracle accept my scan results and analyze all 400 pages of the scan report?”
A. Hoo-boy. I think I have repeated this so much it should be a song chorus in a really annoying hip hop piece but here goes: Oracle runs static analysis tools ourselves (heck, we make them), many of these goldurn tools are ridiculously inaccurate (sometimes the false positive rate is 100% or close to it), running a tool is nothing, the ability to analyze results is everything, and so on and so forth. We put the burden on customers or their consultants to prove there is a There, There because otherwise, we waste a boatload of time analyzing – nothing** – when we could be spending those resources, say, fixing actual security vulnerabilities.

Q. But one of the issues I found was an actual security vulnerability so that justifies reverse engineering, right?
A. Sigh. At the risk of being repetitive, no, it doesn’t, just like you can’t break into a house because someone left a window or door unlocked. I’d like to tell you that we run every tool ever developed against every line of code we ever wrote, but that’s not true. We do require development teams (on-premises, cloud, and internal development organizations) to use security vulnerability-finding tools, we’ve had a significant uptick in tools usage over the last few years (our metrics show this) and we do track tools usage as part of the Oracle Software Security Assurance program. We beat up – I mean, “require” – development teams to use tools because it is very much in our interests (and customers’ interests) to find and fix problems earlier rather than later. That said, no tool finds everything. No two tools find everything. We don’t claim to find everything. That fact still doesn’t justify a customer reverse engineering our code to attempt to find vulnerabilities, especially when the key to whether a suspected vulnerability is an actual vulnerability is the capability to analyze the actual source code, which – frankly – hardly any third party will be able to do, another reason not to accept random scan reports that resulted from reverse engineering at face value, as if we needed one.

Q. Hey, I’ve got an idea, why not do a bug bounty? Pay third parties to find this stuff!
A. Bug bounties are the new boy band (nicely alliterative, no?). Many companies are screaming, fainting, and throwing underwear at security researchers**** to find problems in their code and insisting that This Is The Way, Walk In It: if you are not doing bug bounties, your code isn’t secure. Ah, well, we find 87% of security vulnerabilities ourselves, security researchers find about 3%, and the rest are found by customers. (Small digression: I was busting my buttons today when I found out that a well-known security researcher in a particular area of technology reported a bunch of alleged security issues to us except – we had already found all of them and we were already working on or had fixes. Woo hoo!) I am not dissing bug bounties, just noting that on a strictly economic basis, why would I throw a lot of money at 3% of the problem (and without learning lessons from what you find, it really is “whack a code mole”) when I could spend that money on better prevention like, oh, hiring another employee to do ethical hacking, who could develop a really good tool we use to automate finding certain types of issues, and so on. This is one of those “full immersion baptism” or “sprinkle water over the forehead” issues – we will allow for different religious traditions and do it OUR way – and others can do it THEIR way. Pax vobiscum.

Q. If you don’t let customers reverse engineer code, they won’t buy anything else from you.
A. I actually heard this from a customer. It was ironic because in order for them to buy more products from us (or use a cloud service offering), they’d have to sign – a license agreement! With the same terms that the customer had already admitted violating. “Honey, if you won’t let me cheat on you again, our marriage is through.” “Ah, er, you already violated the ‘forsaking all others’ part of the marriage vow so I think the marriage is already over.” The better discussion to have with a customer – and I always offer this – is for us to explain what we do to build assurance into our products, including how we use vulnerability-finding tools. I want customers to have confidence in our products and services, not just drop a letter on them.

Q. Surely the bad guys and some nations do reverse engineer Oracle’s code and don’t care about your licensing agreement, so why would you try to restrict the behavior of customers with good motives?
A. Oracle’s license agreement exists to protect our intellectual property. “Good motives” – and given the errata of third party attempts to scan code the quotation marks are quite apropos – are not an acceptable excuse for violating an agreement willingly entered into. Any more than “but everybody else is cheating on his or her spouse” is an acceptable excuse for violating “forsaking all others” if you said it in front of witnesses.

At this point, I think I am beating a dead – or should I say, decompiled – horse. We ask that customers not reverse engineer our code to find suspected security issues: we have source code, we run tools against the source code (as well as against executable code), it’s actually our job to do that, and we don’t need or want a customer or random third party to reverse engineer our code to find security vulnerabilities. And last, but really first, the Oracle license agreement prohibits it. Please don’t go there.

* I suspect at least part of the anger of customers in these back-and-forth discussions is because the customer had already paid a security consultant to do the work. They are angry with us for having been sold a bill of goods by their consultant (where the consultant broke the license agreement).

** The only analogy I can come up with is – my bookshelf. Someone convinced that I had a prurient interest in pornography could look at the titles on my bookshelf, conclude they are salacious, and demand an explanation from me as to why I have a collection of steamy books. For example (these are all real titles on my shelf): Thunder Below! (“whoo boy, must be hot stuff!”), Naked Economics (“nude Keynesians!”)***, Inferno (“even hotter stuff!”), At Dawn We Slept (“you must be exhausted from your, ah, nighttime activities…”). My response is that I don’t have to explain my book tastes or respond to baseless FUD. (If anybody is interested, the actual book subjects are, in order, 1) the exploits of WWII submarine skipper and Congressional Medal of Honor recipient CAPT Eugene Fluckey, USN, 2) a book on economics, 3) a book about the European theater in WWII, and 4) the definitive work concerning the attack on Pearl Harbor.)

*** Absolutely not, I loathe Keynes. There are more extant dodos than actual Keynesian multipliers. Although “dodos” and “true believers in Keynesian multipliers” are interchangeable terms as far as I am concerned.

**** I might be exaggerating here. But maybe not.



[ISN] Hacker Claims Feds Hit Him With 44 Felonies When He Refused to Be an FBI Spy

http://www.wired.com/2015/02/hacker-claims-feds-hit-44-felonies-refused-fbi-spy/ By Andy Greenberg Threat Level Wired.com 02.18.15 A year ago, the Department of Justice threatened to put Fidel Salinas in prison for the rest of his life for hacking crimes. But before the federal government brought those charges against him, Salinas now says, it tried a different tactic: recruiting him. A Southern District of Texas judge sentenced Salinas earlier this month to six months in prison and a $10,600 fine after he pleaded guilty to a misdemeanor count of computer fraud and abuse. The charge stemmed from his repeatedly scanning the local Hidalgo County website for vulnerabilities in early 2012. But just months before he took that plea, the 28-year-old with ties to the hacktivist group Anonymous instead faced 44 felony hacking and cyberstalking charges, all of which were later dismissed. And now that his case is over, Salinas is willing to say why he believes he faced that overwhelming list of empty charges. As he tells it, two FBI agents asked him to hack targets on the bureau’s behalf, and he refused. Over the course of a six-hour FBI interrogation in May, 2013, months after his arrest, Salinas says two agents from the FBI’s Southern District of Texas office asked him to use his skills to gather information on Mexican drug cartels and local government figures accepting bribes from drug traffickers. “They asked me to gather information on elected officials, cartel members, anyone I could get data from that would help them out,” Salinas told WIRED in a phone interview before his sentencing. “I told them no.” “Fundamentally this represents the FBI trying to recruit by indictment,” says Salinas’ lawyer Tor Ekeland, who took the case pro bono last year. “The message was clear: If he had agreed to help them, they would have dropped the charges in a second.” […]



[ISN] The Security Setup – HD Moore

http://www.thesecuritysetup.com/home/2014/10/1/hd-moore [Interesting website I found while following someone else who was profiled earlier, Uri with @redteamsblog, the idea here is ‘what setup do folks in security use to attack, defend, build, break, hack, crack, secure, etc.’ which should make for some interesting reading. – WK] H D Moore OCTOBER 1, 2014 Who are you, and what do you do? My name is H D Moore (since the day I was born, it doesn’t stand for anything). I am a security researcher and the chief research officer for Rapid7. Some folks may be familiar with my work on Metasploit, but these days I also spend a lot of time scanning the internet as part of Project Sonar. My servers send friendly greetings to your servers at least once a week. Howdy! What hardware & operating systems do you use? Lots. My normal workload involves crunching a billion records at a time, running a dozen different operating systems, and still handling corporate stuff via Outlook and PowerPoint. As of 2009, I finally made the switch to Windows as my primary OS after being a die-hard Linux user since 1995. That doesn’t mean that I use Windows itself all that much, but I find it to be a useful environment to run virtual machines and access the rest of my hardware with SSH and X11. The tipping point was the need to quickly respond to corporate email and edit Office documents without using a dedicated virtual machine or mangling the contents in the process. The second benefit to using Windows is on the laptop front; Suspend, resume, and full hardware support don’t involve weeks of tuning just to have a portable machine. Finally, I tend to play a lot of video games as well, which work best on overspecced Windows hardware. All that said, Windows as productivity platform isn’t great, and almost all of my real work occurs in web browsers (Chrome), virtual machines (VMWare for Intel/AMD64 and QEmu for RISC), and SSH-forwarded XFCE4 tabbed-terminals. 
The laptop I currently use started life as a banged up ASUS ROG G750 (17″) bought as the display model from a Best Buy. The drives, video card, and memory were swapped out, bringing the total specs up to 32GB of RAM, a 512GB SSD boot disk, a 1TB backup disk, and a GeForce GTX 770 GPU. This runs the most loathed operating system of all, Windows 8.1 (Update 1) Enterprise, but it has a huge screen, was relatively cheap, and can run my development virtual machines without falling over. It also runs Borderlands 2 and Skyrim at maximum settings, critical features for any mobile system. Given that the total cost was under $1,500, it is a great machine for working on the road and blocking automatic weapons fire (as it weighs about 20 lbs with accessories). I carry this beast around in a converted ammunition bag, sans the grenade pouches. […]



[ISN] Thousands Of People Oblivious To Fact That Anyone On The Internet Can Access Their Computers

http://www.forbes.com/sites/kashmirhill/2014/08/13/so-many-pwns/ By Kashmir Hill Forbes Staff 8/13/2014 There are technologists who specialize in “scanning the Internet.” They are like a search team making its way through a neighborhood, but instead of checking the knob of every door, they check Internet entrances to online devices to see which ones are open. These people have been screaming for some time that there is a lot of stuff exposed on the Internet that shouldn’t be: medical devices, power plants, surveillance cameras, street lights, home monitoring systems, and on and on. But incredibly, their message doesn’t seem to get through, because their scans keep on picking up new devices. While talking about the issue at hacker conference Defcon on Sunday, security engineer Paul McMillan sent his winged monkey scanners out looking for computers that have remote access software on them, but no password. In just that short hour, the results came pouring in: thousands of computers on port 5900 using a program called VNC for remote access. The total number is likely over 30,000. Those using the program failed to password-protect it, meaning anyone who comes looking can see what they’re doing, and manipulate their computers. McMillan set a scanner to take a screenshot of every exposed computer it came across. I went through the screens captured Sunday and saw people checking Facebook, playing video games, watching Ender’s Game, reading Reddit, Skyping, reviewing surveillance cameras, shopping on Amazon, reading email, editing price lists and bills, and, of course, watching porn. I saw access screens for pharmacies, point of sale systems, power companies, gas stations, tech and media companies, a cattle-tracking company, and hundreds of cabs in Korea. 
This isn’t just about watching people use their computers; the fact that the scanner got in means anyone could manipulate the devices, changing the power company’s settings, pausing the porn stream, going through a company’s records, or reviewing the prescriptions for a pharmacy’s patients. There is no need for hackers to go to great lengths to compromise these computers; their owners have built in backdoors with no locks. “It’s like leaving your computer open, unlocked and ready to rock in a crowded bus terminal and walking away,” says security engineer Dan Tentler, who presented with McMillan. Increasingly, everything is connected to the Internet, and unfortunately, people don’t always know how to connect their things securely. “It’s important to remember that this scan only scratches the very surface of the problem,” says McMillan. “We can’t legally scan for default passwords, but I’m certain if we did, the results would be orders of magnitude worse.” […]
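The port-5900 scan described above works because a VNC server speaks first: on connect it announces its protocol version with a 12-byte RFB greeting such as "RFB 003.008\n", before any authentication happens. A minimal sketch of such a probe (the host in the usage comment is hypothetical; probing machines you are not authorized to test may be illegal):

```python
import socket

def rfb_version(banner: bytes):
    """Parse the RFB protocol version from a VNC greeting.

    e.g. b"RFB 003.008\\n" -> "3.8". Returns None if the banner
    does not look like a VNC server's handshake.
    """
    if not banner.startswith(b"RFB ") or len(banner) < 12:
        return None
    try:
        major = int(banner[4:7])   # bytes 4-6: zero-padded major version
        minor = int(banner[8:11])  # bytes 8-10: zero-padded minor version
    except ValueError:
        return None
    return f"{major}.{minor}"

def probe_vnc(host, port=5900, timeout=2.0):
    """Connect and report the RFB version the host advertises, if any."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            return rfb_version(s.recv(12))
    except OSError:
        return None

# Usage (hypothetical address; only probe systems you are authorized to test):
# print(probe_vnc("192.0.2.10"))
```

A scanner like McMillan’s repeats this check across many addresses and then continues the handshake; a server whose security-type list includes “None” is exactly the unprotected case the article describes.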



[ISN] Antivirus pioneer Symantec declares AV “dead” and “doomed to failure”

http://arstechnica.com/security/2014/05/antivurus-pioneer-symantec-declares-av-dead-and-doomed-to-failure/ By Dan Goodin Ars Technica May 5, 2014 Commercial antivirus pioneer Symantec has finally admitted publicly what critics have been saying for years: the growing inability of the scanning software to detect the majority of malware attacks makes it “dead” and “doomed to failure,” according to a published report. Over the past two reported quarters, Symantec has watched revenue fall, and sales are expected to flag again in the most recent period when the company releases financial results later this week, an article published Monday by The Wall Street Journal reported. The declines come as Juniper Networks, FireEye, and other companies have rolled out products and services that take a decidedly different approach to securing computers and networks. Rather than scan for files that are categorized as malicious, these newer techniques aim to detect, minimize, and contain the damage that attackers can do in the event that they penetrate a customer’s defenses. Citing Symantec senior vice president Brian Dye, the WSJ said: […]
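“Scanning for files categorized as malicious” is, at its simplest, a lookup against a list of known-bad file hashes. A toy sketch of that model (the blocklist entry is hypothetical) makes the article’s point concrete: a sample nobody has catalogued yet always passes, which is why signature-only AV keeps losing ground to detect-and-contain approaches:

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known malware samples.
KNOWN_BAD_SHA256 = {
    hashlib.sha256(b"known-malware-sample").hexdigest(),
}

def is_known_malicious(file_bytes: bytes) -> bool:
    """Classic signature check: flag a file only if its hash is already
    on the blocklist. A never-before-seen sample returns False, however
    malicious it actually is."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_SHA256

print(is_known_malicious(b"known-malware-sample"))  # True
print(is_known_malicious(b"novel-sample"))          # False: undetected
```

Real AV engines add heuristics and fuzzy signatures on top of exact hashes, but the structural weakness is the same: detection depends on having seen the threat, or something close to it, before.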

