Tag Archives: behavior

My Latest Gartner Research: Enterprise Firewall and Unified Threat Management Products Impact End-User Buying Behavior

This document helps product developers and managers of security providers prepare enterprise firewall and unified threat management products for the impact of digital business, mobile and the Internet of Things on end-user buying behavior….

Gartner Subscribers can access this research by clicking here.





[ISN] Russian Satellite Maneuvers, Silence Worry Intelsat

http://spacenews.com/russian-satellite-maneuvers-silence-worry-intelsat/ By Mike Gruss SpaceNews.com October 9, 2015 WASHINGTON – A mysterious Russian military satellite parked itself between two Intelsat satellites in geosynchronous orbit for five months this year, alarming company executives and leading to classified meetings among U.S. government officials. The Russian satellite, alternatively known as Luch or Olymp, launched in September 2014 and seven months later moved to a position directly between the Intelsat 7 and Intelsat 901 satellites, which are located within half a degree of one another, 36,000 kilometers above the equator. At times, the Russian satellite maneuvered to within about 10 kilometers of the Intelsat space vehicles, sources said, a distance so close that company leaders believed their satellites could be at risk. The satellite’s movements were highlighted by Brian Weeden, technical adviser at the Secure World Foundation, in an Oct. 5 analysis of Russian rendezvous and proximity operations for SpaceNews’ sister publication, The Space Review. “This is not normal behavior and we’re concerned,” Kay Sears, president of Intelsat General, the government services arm of Intelsat, said in an Oct. 8 interview with SpaceNews. “We absolutely need responsible operators. Space is a domain that has to be protected.” […]
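A quick back-of-the-envelope check (mine, not the article's) puts those numbers in perspective: the geosynchronous belt sits at an orbital radius of roughly 42,164 km, so even the half-degree slot separation between the two Intelsat satellites corresponds to hundreds of kilometers of arc, which is why a 10 km approach reads as alarmingly close.

```python
import math

# Geosynchronous orbit: ~35,786 km altitude plus ~6,378 km Earth radius.
GEO_RADIUS_KM = 42_164

def geo_arc_km(angular_sep_deg: float) -> float:
    """Arc distance along the GEO belt for a given angular separation."""
    return math.radians(angular_sep_deg) * GEO_RADIUS_KM

print(f"0.5 deg of GEO arc ~ {geo_arc_km(0.5):.0f} km")  # ~368 km
print(f"10 km approach ~ {10 / geo_arc_km(0.5):.1%} of that slot spacing")
```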



[ISN] Most corporate risk due to just 1% of employees

http://www.csoonline.com/article/2975914/application-security/most-corporate-risk-due-to-just-1-of-employees.html By Maria Korolov CSO Aug 26, 2015 Just 1 percent of employees are responsible for 75 percent of cloud-related enterprise security risk, and companies can dramatically reduce their exposure at very little additional cost by paying extra attention to these users. According to newly released research by CloudLock, which analyzed the behavior of 10 million users during the second quarter of this year, these users are sending out plain-text passwords, sharing files, accidentally downloading malware, clicking on phishing links, using risky applications, reusing passwords, and engaging in other types of dangerous behaviors. These users include rank-and-file employees as well as super-privileged users, software architects, and non-human accounts used to perform automated tasks. According to the most recent Verizon data breach report, the two biggest attack vectors, responsible for more than two-thirds of all breaches last year, involved stolen credentials or phishing. […]
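The finding is, at bottom, a concentration-of-risk (Pareto) measurement: rank users by the security events attributed to them and see what share of the total the top slice generates. A minimal sketch of that computation with made-up numbers (CloudLock's actual scoring method isn't described in the excerpt):

```python
from collections import Counter

def top_slice_share(events_per_user: Counter, slice_frac: float = 0.01) -> float:
    """Fraction of all risk events attributable to the riskiest slice of users."""
    counts = sorted(events_per_user.values(), reverse=True)
    top_n = max(1, int(len(counts) * slice_frac))
    return sum(counts[:top_n]) / sum(counts)

# Hypothetical population: 99 users trigger one risky event each,
# while a single outlier account triggers 300.
events = Counter({f"user{i}": 1 for i in range(99)})
events["user99"] = 300

print(f"Top 1% of users account for {top_slice_share(events):.0%} of events")
# -> Top 1% of users account for 75% of events
```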



[ISN] Trust no one: A better way to close the security gap?

http://gcn.com/articles/2015/08/19/zero-trust-security.aspx By Paul McCloskey GCN.com Aug 19, 2015 Agencies are increasingly turning to predictive analytics to root out fraud, but those aren’t the only tools being used to spot and control anomalous behavior. New identity security tools are emerging to help enterprises that might be victimized in fraud schemes enabled by insiders or attackers using insider credentials. Those users have been at the center of several recent high-profile attacks. Their privileges were exploited as the result of sophisticated spear-phishing attacks, including the one on health insurer Anthem earlier this year in which 80 million records were stolen. “These are privileged users with access to everything in the database — not just their records; they have the ability to go from system to system inside a corporate or government infrastructure,” said Ken Ammon, chief strategy officer at Xceedium. “What happens is criminals target those individuals because they know their roles or their accounts are extremely powerful in the organization,” Ammon said. “If they can send them an email that they might click on, it installs as a super user who now can download the entire corporate database from network to network.” To help defend against that vulnerability, Xceedium has embraced a policy of “zero trust,” whereby access is extended only for a specific reason and for a specific amount of time. […]
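The "zero trust" approach Ammon describes reduces to a simple rule: no standing privileges, only grants that name a user, a resource, a reason, and an expiry, and that are re-checked on every access. A minimal sketch of the idea (my illustration under those stated assumptions, not Xceedium's actual implementation):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AccessGrant:
    user: str
    resource: str
    reason: str           # every grant carries a specific justification...
    expires_at: datetime  # ...and a hard expiry

def is_allowed(grant: AccessGrant, user: str, resource: str) -> bool:
    """Deny by default; allow only a matching, unexpired grant."""
    return (
        grant.user == user
        and grant.resource == resource
        and datetime.now(timezone.utc) < grant.expires_at
    )

# Example: a DBA gets 30 minutes on one database for one ticket.
grant = AccessGrant(
    user="alice",
    resource="payroll-db",
    reason="ticket-4711: restore corrupted table",
    expires_at=datetime.now(timezone.utc) + timedelta(minutes=30),
)
print(is_allowed(grant, "alice", "payroll-db"))  # True (until it expires)
print(is_allowed(grant, "alice", "hr-db"))       # False: no standing access
```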



[ISN] Pentagon Researchers Will Wage Counterattack on Crippling ‘DDoS’ Cyber Strikes

http://www.nextgov.com/cybersecurity/2015/08/pentagon-researchers-will-wage-counterattack-crippling-ddos-cyber-strikes/119192/ By Aliya Sternstein Nextgov.com August 17, 2015 The Pentagon has in mind a three-pronged counterattack against a decades-old form of cyber assault that continues to paralyze government and industry networks, despite its low cost, sometimes just $10 a hit. Beginning next spring, military-funded researchers are scheduled to produce new tools that would quickly enable organizations to bounce back from so-called distributed denial-of-service attacks. A recovery time of at most 10 seconds is the goal, according to the Defense Department. Today, attackers have a relatively easy time aiming bogus traffic at computer servers to knock them offline. One reason is that computer systems often are consolidated, making for a wide target area. Another weakness is the predictable behavior of systems that support Web services. And finally, certain types of DDoS attacks that evince little malicious traffic go undetected. […]
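That last weakness is worth unpacking: detectors that key on traffic volume miss "low and slow" attacks, which hold many connections open while sending almost nothing (Slowloris is the classic example). A toy illustration of the blind spot, with hypothetical numbers:

```python
# Naive volumetric detector: flags a source only when it exceeds a
# byte-rate threshold (the threshold here is made up for illustration).
THRESHOLD_BPS = 10_000

def volumetric_alert(bytes_per_sec: float) -> bool:
    return bytes_per_sec > THRESHOLD_BPS

flood_attacker = 5_000_000   # classic flood: easily flagged
slow_attacker = 400 * 10     # 400 open connections x 10 B/s each

print(volumetric_alert(flood_attacker))  # True: detected
print(volumetric_alert(slow_attacker))   # False: 4 kB/s looks benign,
# yet 400 held-open connections can still exhaust a server's worker pool.
```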



[ISN] No, You Really Can’t

https://blogs.oracle.com/maryanndavidson/entry/no_you_really_can_t Mary Ann Davidson Blog By User701213-Oracle Aug 10, 2015

I have been doing a lot of writing recently. Some of my writing has been with my sister, with whom I write murder mysteries using the nom-de-plume Maddi Davidson. Recently, we’ve been working on short stories, developing a lot of fun new ideas for dispatching people (literarily speaking, though I think about practical applications occasionally when someone tailgates me). Writing mysteries is a lot more fun than the other type of writing I’ve been doing.

Recently, I have seen a large-ish uptick in customers reverse engineering our code to attempt to find security vulnerabilities in it. This is why I’ve been writing a lot of letters to customers that start with “hi, howzit, aloha” but end with “please comply with your license agreement and stop reverse engineering our code, already.”

I can understand that in a world where it seems almost every day someone else has had a data breach and lost umpteen gazillion records to unnamed intruders who may have been working at the behest of a hostile nation-state, people want to go the extra mile to secure their systems. That said, you would think that before gearing up to run that extra mile, customers would already have ensured they’ve identified their critical systems, encrypted sensitive data, applied all relevant patches, moved to a supported product release, and used tools to ensure configurations are locked down – in short, the usual security hygiene – before they attempt to find zero-day vulnerabilities in the products they are using. And in fact, there are a lot of data breaches that would be prevented by doing all that stuff, as unsexy as it is, instead of hyperventilating that the Big Bad Advanced Persistent Threat using a zero-day is out to get me! Whether you are running your own IT show or a cloud provider is running it for you, there are a host of good security practices that are well worth doing.

Even if you want to have reasonable certainty that suppliers take reasonable care in how they build their products – and there is so much more to assurance than running a scanning tool – there are a lot of things a customer can do, like, gosh, actually talking to suppliers about their assurance programs or checking certifications for products for which there are Good Housekeeping seals (or “good code” seals) like Common Criteria certifications or FIPS 140 certifications. Most vendors – at least, most of the large-ish ones I know – have fairly robust assurance programs now (we know this because we all compare notes at conferences). That’s all well and good, is appropriate customer due diligence, and stops well short of “hey, I think I will do the vendor’s job for him/her/it and look for problems in source code myself,” even though:

- A customer can’t analyze the code to see whether there is a control that prevents the attack the scanning tool is screaming about (which is most likely a false positive)
- A customer can’t produce a patch for the problem – only the vendor can do that
- A customer is almost certainly violating the license agreement by using a tool that does static analysis (which operates against source code)

I should state at the outset that in some cases I think the customers doing reverse engineering are not always aware of what is happening, because the actual work is being done by a consultant, who runs a tool that reverse engineers the code, gets a big fat printout, and drops it on the customer, who then sends it to us.

Now, I should note that we don’t just accept scan reports as “proof that there is a there, there,” in part because whether you are talking static or dynamic analysis, a scan report is not proof of an actual vulnerability. Often, they are not much more than a pile of steaming … FUD. (That is what I planned on saying all along: FUD.) This is why we require customers to log a service request for each alleged issue (not just hand us a report) and provide a proof of concept (which some tools can generate). If we determine as part of our analysis that scan results could only have come from reverse engineering (in at least one case, because the report said, cleverly enough, “static analysis of Oracle XXXXXX”), we send a letter to the sinning customer, and a different letter to the sinning consultant-acting-on-customer’s-behalf – reminding them of the terms of the Oracle license agreement that preclude reverse engineering, So Please Stop It Already. (In legalese, of course. The Oracle license agreement has a provision such as: “Customer may not reverse engineer, disassemble, decompile, or otherwise attempt to derive the source code of the Programs…” which we quote in our missive to the customer.) Oh, and we require customers/consultants to destroy the results of such reverse engineering and confirm they have done so.

Why am I bringing this up? The main reason is that, when I see a spike in X, I try to get ahead of it. I don’t want more rounds of “you broke the license agreement,” “no, we didn’t,” “yes, you did,” “no, we didn’t.” I’d rather spend my time, and my team’s time, working on helping development improve our code than argue with people about where the license agreement lines are. Now is a good time to reiterate that I’m not beating people up over this merely because of the license agreement. More like, “I do not need you to analyze the code since we already do that, it’s our job to do that, we are pretty good at it, we can – unlike a third party or a tool – actually analyze the code to determine what’s happening, and at any rate most of these tools have a close to 100% false positive rate, so please do not waste our time on reporting little green men in our code.” I am not running away from our responsibilities to customers, merely trying to avoid a painful, annoying, and mutually time-wasting exercise.

For this reason, I want to explain what Oracle’s purpose is in enforcing our license agreement (as it pertains to reverse engineering) and, in a reasonably precise yet hand-wavy way, explain “where the line is you can’t cross or you will get a strongly-worded letter from us.”

Caveat: I am not a lawyer, even if I can use words like stare decisis in random conversations. (Except with my dog, because he only understands Hawaiian, not Latin.) Ergo, when in doubt, refer to your Oracle license agreement, which trumps anything I say herein!

With that in mind, a few FAQ-ish explanations:

Q. What is reverse engineering?

A. Generally, our code is shipped in compiled (executable) form (yes, I know that some code is interpreted). Customers get code that runs, not the code “as written.” That is for multiple reasons, such as that users generally only need to run code, not understand how it all gets put together, and the fact that our source code is highly valuable intellectual property (which is why we have a lot of restrictions on who accesses it and protections around it). The Oracle license agreement limits what you can do with the as-shipped code, and that limitation includes the fact that you aren’t allowed to de-compile, dis-assemble, de-obfuscate or otherwise try to get source code back from executable code. There are a few caveats around that prohibition, but there isn’t an “out” for “unless you are looking for security vulnerabilities, in which case, no problem-o, mon!” If you are trying to get the code into a different form from the way we shipped it to you – as in, the way we wrote it before we did something to it to get it into the form you are executing – you are probably reverse engineering. Don’t. Just – don’t.

Q. What is Oracle’s policy regarding the submission of security vulnerabilities (found by tools or not)?

A. We require customers to open a service request (one per vulnerability) and provide a test case to verify that the alleged vulnerability is exploitable. The purpose of this policy is to try to weed out the very large number of inaccurate findings by security tools (false positives).

Q. Why are you going after consultants the customer hired? The consultant didn’t sign the license agreement!

A. The customer signed the Oracle license agreement, and the consultant hired by the customer is thus bound by the customer’s signed license agreement. Otherwise, everyone would hire a consultant to say (legal terms follow) “Nanny, nanny boo boo, big bad consultant can do X even if the customer can’t!”

Q. What does Oracle do if there is an actual security vulnerability?

A. I almost hate to answer this question because I want to reiterate that customers Should Not and Must Not reverse engineer our code. However, if there is an actual security vulnerability, we will fix it. We may not like how it was found, but we aren’t going to ignore a real problem – that would be a disservice to our customers. We will, however, fix it to protect all our customers, meaning everybody will get the fix at the same time. However, we will not give a customer reporting such an issue (that they found through reverse engineering) a special (one-off) patch for the problem. We will also not provide credit in any advisories we might issue. You can’t really expect us to say “thank you for breaking the license agreement.”

Q. But the tools that decompile products are getting better and easier to use, so reverse engineering will be OK in the future, right?

A. Ah, no. The point of our prohibition against reverse engineering is intellectual property protection, not “how can we cleverly prevent customers from finding security vulnerabilities – bwahahahaha – so we never have to fix them – bwahahahaha.” Customers are welcome to use tools that operate on executable code but that do not reverse engineer code. To that point, customers using a third-party tool or service offering would be well served by asking questions of the tool (or tool service) provider as to a) how their tool works and b) whether they perform reverse engineering to “do what they do.” An ounce of discussion is worth a pound of “no we didn’t,” “yes you did,” “didn’t,” “did” arguments.*

Q. “But I hired a really cool code consultant/third-party code scanner/whatever. Why won’t mean old Oracle accept my scan results and analyze all 400 pages of the scan report?”

A. Hoo-boy. I think I have repeated this so much it should be a song chorus in a really annoying hip hop piece, but here goes: Oracle runs static analysis tools ourselves (heck, we make them), many of these goldurn tools are ridiculously inaccurate (sometimes the false positive rate is 100% or close to it), running a tool is nothing, the ability to analyze results is everything, and so on and so forth. We put the burden on customers or their consultants to prove there is a There, There because otherwise we waste a boatload of time analyzing – nothing** – when we could be spending those resources, say, fixing actual security vulnerabilities.

Q. But one of the issues I found was an actual security vulnerability, so that justifies reverse engineering, right?

A. Sigh. At the risk of being repetitive, no, it doesn’t, just like you can’t break into a house because someone left a window or door unlocked. I’d like to tell you that we run every tool ever developed against every line of code we ever wrote, but that’s not true. We do require development teams (on-premises, cloud, and internal development organizations) to use security vulnerability-finding tools, we’ve had a significant uptick in tools usage over the last few years (our metrics show this), and we do track tools usage as part of the Oracle Software Security Assurance program. We beat up – I mean, “require” – development teams to use tools because it is very much in our interests (and customers’ interests) to find and fix problems earlier rather than later. That said, no tool finds everything. No two tools find everything. We don’t claim to find everything. That fact still doesn’t justify a customer reverse engineering our code to attempt to find vulnerabilities, especially when the key to whether a suspected vulnerability is an actual vulnerability is the capability to analyze the actual source code, which – frankly – hardly any third party will be able to do, another reason not to accept random scan reports that resulted from reverse engineering at face value, as if we needed one.

Q. Hey, I’ve got an idea, why not do a bug bounty? Pay third parties to find this stuff!

A. Bug bounties are the new boy band (nicely alliterative, no?). Many companies are screaming, fainting, and throwing underwear at security researchers**** to find problems in their code and insisting that This Is The Way, Walk In It: if you are not doing bug bounties, your code isn’t secure. Ah, well, we find 87% of security vulnerabilities ourselves, security researchers find about 3%, and the rest are found by customers. (Small digression: I was busting my buttons today when I found out that a well-known security researcher in a particular area of technology reported a bunch of alleged security issues to us, except – we had already found all of them and we were already working on or had fixes. Woo hoo!) I am not dissing bug bounties, just noting that, on a strictly economic basis, why would I throw a lot of money at 3% of the problem (and without learning lessons from what you find, it really is “whack a code mole”) when I could spend that money on better prevention like, oh, hiring another employee to do ethical hacking, who could develop a really good tool we use to automate finding certain types of issues, and so on. This is one of those “full immersion baptism” or “sprinkle water over the forehead” issues – we will allow for different religious traditions and do it OUR way – and others can do it THEIR way. Pax vobiscum.

Q. If you don’t let customers reverse engineer code, they won’t buy anything else from you.

A. I actually heard this from a customer. It was ironic because in order for them to buy more products from us (or use a cloud service offering), they’d have to sign – a license agreement! With the same terms that the customer had already admitted violating. “Honey, if you won’t let me cheat on you again, our marriage is through.” “Ah, er, you already violated the ‘forsaking all others’ part of the marriage vow, so I think the marriage is already over.” The better discussion to have with a customer – and I always offer this – is for us to explain what we do to build assurance into our products, including how we use vulnerability-finding tools. I want customers to have confidence in our products and services, not just drop a letter on them.

Q. Surely the bad guys and some nations do reverse engineer Oracle’s code and don’t care about your licensing agreement, so why would you try to restrict the behavior of customers with good motives?

A. Oracle’s license agreement exists to protect our intellectual property. “Good motives” – and given the errata of third-party attempts to scan code, the quotation marks are quite apropos – are not an acceptable excuse for violating an agreement willingly entered into. Any more than “but everybody else is cheating on his or her spouse” is an acceptable excuse for violating “forsaking all others” if you said it in front of witnesses.

At this point, I think I am beating a dead – or should I say, decompiled – horse. We ask that customers not reverse engineer our code to find suspected security issues: we have source code, we run tools against the source code (as well as against executable code), it’s actually our job to do that, and we don’t need or want a customer or random third party to reverse engineer our code to find security vulnerabilities. And last, but really first, the Oracle license agreement prohibits it. Please don’t go there.

* I suspect at least part of the anger of customers in these back-and-forth discussions is because the customer has already paid a security consultant to do the work. They are angry with us for having been sold a bill of goods by their consultant (where the consultant broke the license agreement).

** The only analogy I can come up with is – my bookshelf. Someone convinced that I had a prurient interest in pornography could look at the titles on my bookshelf, conclude they are salacious, and demand an explanation from me as to why I have a collection of steamy books. For example (these are all real titles on my shelf):

- Thunder Below! (“whoo boy, must be hot stuff!”)
- Naked Economics (“nude Keynesians!”)***
- Inferno (“even hotter stuff!”)
- At Dawn We Slept (“you must be exhausted from your, ah, nighttime activities…”)

My response is that I don’t have to explain my book tastes or respond to baseless FUD. (If anybody is interested, the actual book subjects are, in order: 1) the exploits of WWII submarine skipper and Congressional Medal of Honor recipient CAPT Eugene Fluckey, USN; 2) a book on economics; 3) a book about the European theater in WWII; and 4) the definitive work concerning the attack on Pearl Harbor.)

*** Absolutely not, I loathe Keynes. There are more extant dodos than actual Keynesian multipliers. Although “dodos” and “true believers in Keynesian multipliers” are interchangeable terms as far as I am concerned.

**** I might be exaggerating here. But maybe not.
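Davidson's point about controls that a scan report cannot see is easiest to grasp with a toy case. In the sketch below (a contrived example of mine, not Oracle code), a pattern-matching scanner would flag the string-built SQL as injection-prone, yet an upstream allow-list makes the finding a false positive, and only someone reading the source can tell:

```python
import sqlite3

ALLOWED_COLUMNS = {"name", "email"}  # the upstream control a scan report can't see

def get_column(conn: sqlite3.Connection, column: str) -> list:
    # A scanner pattern-matching on string-built SQL flags this as injectable...
    if column not in ALLOWED_COLUMNS:  # ...but this allow-list blocks any value
        raise ValueError(f"column not allowed: {column}")  # an attacker controls.
    return conn.execute(f"SELECT {column} FROM users").fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('bob', 'bob@example.com')")
print(get_column(conn, "email"))  # [('bob@example.com',)]
# get_column(conn, "email; DROP TABLE users--")  # raises ValueError: blocked
```

Column names cannot be bound as SQL parameters, so an allow-list is the idiomatic control here; a tool that only sees the query string has no way to know it exists.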



[ISN] Massive leak reveals Hacking Team’s most private moments in messy detail

http://arstechnica.com/security/2015/07/massive-leak-reveals-hacking-teams-most-private-moments-in-messy-detail/ By Dan Goodin Ars Technica July 6, 2015 Privacy and human rights advocates are having a field day picking through a massive leak purporting to show spyware developer Hacking Team’s most candid moments, including documents that appear to contradict the company’s carefully scripted PR campaign. “Imagine this: a leak on WikiLeaks showing YOU explaining the evilest technology on earth! :-),” Hacking Team CEO David Vincenzetti wrote in a June 8 e-mail to company employees, including Walter Furlan, whose LinkedIn profile lists him as the international sales engineer of the spyware developer. “You would be demonized by our dearest friends the activists, and normal people would point their fingers at you.” Other documents suggested the US FBI was among the customers paying for software that allowed targets to be surreptitiously surveilled as they used computers or smartphones. According to one spreadsheet first reported by Wired, the FBI has paid Hacking Team more than $773,226.64 since 2011 for services related to the Hacking Team product known as “Remote Control System,” which is also marketed under the name “Galileo.” One spreadsheet column, listed simply as “Exploit,” is marked “yes” for a sale in 2012, an indication Hacking Team may have bundled some sort of attack code that remotely hijacked targets’ computers or phones. Previously, the FBI has been known to have wielded a Firefox exploit to decloak child pornography suspects using Tor. Security researchers have also scoured leaked Hacking Team source code for suspicious behavior. Among the findings: references to child porn embedded in code related to Galileo. […]



[ISN] Evil Wi-Fi captive portal could spoof Apple Pay to get users’ credit card data

http://arstechnica.com/security/2015/06/evil-wifi-captive-portal-could-fool-users-into-giving-up-apple-pay-data/ By Sean Gallagher Ars Technica June 4, 2015 Researchers at Wandera, a mobile security company, have alerted Apple to a potential security vulnerability in iOS that could be used by attackers to fool users into giving up their credit card data and personal information. The vulnerability, based on the default behavior of iOS devices with Wi-Fi turned on, could be used to inject a fake “captive portal” page that imitates the Apple Pay interface. The attack leverages a well-known issue Ars has reported on in the past: iOS devices with Wi-Fi turned on will attempt by default to connect to any access point with a known SSID. Those SSIDs are broadcast in “probe” messages from the device whenever it’s not connected to a network. A rogue access point could capture those probe requests to masquerade as a known network, and then throw up a pop-up screen masquerading as any web page or app. The Wandera attack uses this behavior to get a mobile device to connect and then presents a pop-up portal page – the type usually used when connecting to a public Wi-Fi service to present a Web-based login screen – that is designed to resemble an Apple Pay screen for entering credit card data. The attack could be launched by someone near a customer who has just completed or is conducting an Apple Pay transaction, so that the user is fooled into believing Apple Pay itself is requesting that the credit card data be re-entered. An attacker could loiter near a point-of-sale system with an Apple Pay terminal and continuously launch the attack. […]
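The precondition the attack relies on, devices announcing remembered SSIDs in probe requests, is easy to observe passively. A short Scapy sketch of that observation (assumes Linux, root privileges, and a wireless card already in monitor mode; the wlan0mon interface name is an assumption, and you should only monitor your own devices):

```python
from scapy.all import sniff
from scapy.layers.dot11 import Dot11ProbeReq, Dot11Elt

def show_probe(pkt) -> None:
    """Print the SSIDs that nearby devices ask for in probe requests."""
    if pkt.haslayer(Dot11ProbeReq):
        elt = pkt.getlayer(Dot11Elt)  # first element in a probe req is the SSID
        ssid = elt.info.decode(errors="replace") if elt and elt.info else "<broadcast>"
        print(f"{pkt.addr2} is probing for {ssid!r}")

# Requires a monitor-mode interface; each printed SSID is one a rogue AP
# could impersonate to trigger the captive-portal behavior described above.
sniff(iface="wlan0mon", prn=show_probe, store=False)
```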

