Tag Archives: value

[ISN] CarolinaCon-12 – March 2016 – FINAL ANNOUNCEMENT

Forwarded from: Vic Vandal

CarolinaCon-12 will be held on March 4th-6th, 2016 in Raleigh NC. For the cheap price of $40 YOU could get a full weekend of talks, hacks, contests, and parties. Regarding the price increase to $40, it was forced due to ever-rising venue costs. But we promise to provide more value via: great talks, great side events, kickass new attendee badges, cool giveaways, etc.

We’ve selected as many presentations as we can fit into the lineup. Here they are, in no particular order:

– Mo Money Mo Problems: The Cashout – Benjamin Brown
– Breaking Android apps for fun and profit – Bill Sempf
– Gettin’ Vishy with it – Owen / Snide – @LinuxBlog
– Buffer Overflows for x86, x86_64 and ARM – John F. Davis (Math 400)
– Surprise! Everything can kill you. – fort
– Advanced Reconnaissance Framework – Solray
– Introducing PS>Attack, a portable PowerShell attack toolkit – Jared Haight
– Reverse Engineer iOS apps because reasons – twinlol
– FLOSS every day – automatically extracting obfuscated strings from malware – Moritz Raabe and William Ballenthin
– John the Ripper sits in the next cubicle: Cracking passwords in a Corporate environment – Steve Passino
– Dynamic Analysis with Windows Performance Toolkit – DeBuG (John deGruyter)
– Deploying a Shadow Threat Intel Capability: Understanding YOUR Adversaries without Expensive Security Tools – grecs
– AR Hacking: How to turn One Gun Into Five Guns – Deviant Ollam
– Reporting for Hackers – Jon Molesa @th3mojo
– Never Go Full Spectrum – Cyber Randy
– I Am The Liquor – Jim Lahey

CarolinaCon-12 Contests/Challenges/Events:

– Capture The Flag
– Crypto Challenge
– Lockpicking Village
– Hardware Hack-Shop
– Hacker Trivia
– Unofficial CC Shootout

LODGING: If you’re traveling and wish to stay at the Con hotel, here is the direct link to the CarolinaCon discount group rate: www.hilton.com/en/hi/groups/personalized/R/RDUNHHF-CCC-20160303/index.jhtml

NOTE: The website defaults to March 3rd-6th instead of March 4th-6th, and the group rate is no longer available on March 3rd, so make sure that you change the reservation dates to get the group rate.

ATTENTION: The discount group rate on Hilton hotel rooms expires THIS weekend on JANUARY 31st 2016, so act quickly if you plan on staying at the hotel for all of the weekend fun and you want the group rate.

CarolinaCon formal proceedings/talks will run:
– 7pm to 11pm on Friday
– 10am to 9pm on Saturday
– 10am to 4pm on Sunday

For presentation abstracts, speaker bios, the final schedule, side event information, and all the other exciting details (as they develop and as our webmaster gets to them) stay tuned to: www.carolinacon.org

ADVERTISERS / VENDORS / SPONSORS: There are no advertisers, vendors, or sponsors allowed at CarolinaCon….ever. Please don’t waste your time or ours in asking.

CarolinaCon has been Rated “M” for Mature.

Peace,
Vic





[ISN] U.S. government wants in on the public cloud, but needs more transparency

www.computerworld.com/article/3006360/security/us-government-wants-in-on-the-public-cloud-but-needs-more-transparency.html
By Blair Hanley Frank
IDG News Service
Nov 18, 2015

The federal government is trying to move more into the cloud, but service providers’ lack of transparency is harming adoption, according to Arlette Hart, the FBI’s chief information security officer. “There’s a big piece of cloud that’s the ‘trust me’ model of cloud computing,” she said during an on-stage interview at the Structure conference in San Francisco on Wednesday. That’s a tough sell for organizations like the federal government that have to worry about protecting important data.

While Hart said that the federal government wants to get at the “enormous value” in public cloud infrastructure, its interest in moving to the public cloud is also tied to a need for greater security. While major providers like Amazon and Microsoft offer tools that meet the U.S. government’s regulations, not every cloud provider is set up along those lines. In Hart’s view, cloud providers need to be more transparent about what they do with security so the government and other customers can verify that their practices are sufficient for protecting data.

[…]



[ISN] Real-life James Bonds: Actual spooks reveal what a job in MI6 is really like

www.telegraph.co.uk/culture/film/jamesbond/11874457/Real-life-James-Bonds-Actual-spooks-reveal-what-a-job-in-MI6-is-really-like.html
By Frank Gardner
telegraph.co.uk
25 Oct 2015

It’s slick, it’s fast-paced and it’s sexy. But that’s the cinema. SPECTRE, the latest James Bond thriller starring Daniel Craig, opens in cinemas on Monday to critical acclaim. Pure fantasy? Or are there any similarities with the work of a real-life operative in Britain’s Secret Intelligence Service (SIS), better known as MI6? I’ve gone to meet two serving SIS officers to find out.

I don’t notice them at first; there are so many people in the room. Are they part of the camera crew? A couple of people sent up from hotel reception, perhaps, to check we have everything we need? But then we are introduced. “Kamal” – and I’m going to go out on a limb here and guess that is probably not his real name – is 30-something, unshaven, quietly confident. “Kirsty” is only slightly older. Neatly dressed, she looks like she could be running a medium-sized IT company. In fact, she is in recruiting, having already done the hard yards in the field overseas.

Kamal speaks first. “I’m what people would classify as an agent-runner,” he tells me. “Our job is to find individuals with access to secret intelligence of value to the UK government. My job [within MI6] is to build a relationship with these individuals and work with them to obtain the secrets they have access to, securely.”

And bang, up in smoke goes one of the biggest misnomers about espionage and spies. James Bond, and all the true-life men and women who work inside those sandstone and emerald-coloured headquarters at Vauxhall Cross on the banks of the Thames, are not “secret agents”. They are intelligence officers. The people overseas whom they persuade to spy for them are the actual agents.

[…]



[ISN] Salted Hash: Live from DerbyCon 5.0 (Day 1)

http://www.csoonline.com/article/2986763/security-awareness/salted-hash-live-from-derbycon-5-0-day-1.html
By Steve Ragan
Salted Hash
CSO Online
Sept 25, 2015

DerbyCon 5.0 has officially started, and it didn’t take long before the halls were flooded with hackers looking to catch up with their peers as they headed to the first talk of the day.

On Thursday, I had the chance to catch up with a number of people who resonated with the thought process of yesterday’s post. The point being, insider threats aren’t what you think they are, and the core issue isn’t a malicious user – it’s a clueless user. In addition, when dealing with insider-based issues, policies that prohibit or hinder workflow will create more problems than they solve.

Today, the topic is threat intelligence. I learned something interesting recently: if you gather a group of hackers and researchers around a table and ask them to define threat intelligence, the conversation will quickly spin into a rage-fueled discussion about sales-driven security (meaning InfoSec products that are pitched and sold with no real security value).

[…]



[ISN] Guilty Plea in Morgan Stanley Insider Breach

http://www.bankinfosecurity.com/guilty-plea-in-morgan-stanley-insider-breach-a-8546
By Tracy Kitten @FraudBlogger
Bank Info Security
September 22, 2015

A former wealth management adviser at Morgan Stanley pleaded guilty this week to stealing confidential information linked to more than 700,000 client accounts over a period of several years. Some fraud-prevention experts say the investment banking firm could have taken steps to detect the suspicious insider activity sooner.

Galen Marsh, who worked for the firm’s Manhattan office until he was fired in January 2015, told the U.S. District Court for the Southern District of New York on Sept. 21 that he illegally accessed account holders’ names, addresses and other personal information, along with investment values and earnings, from computer systems used by Morgan Stanley to manage confidential data, according to court records. Between June 2011 and December 2014, Marsh conducted nearly 6,000 unauthorized searches of confidential client information and then uploaded the information on 730,000 clients to a server at his home in New Jersey, the court documents show.

[…]



[ISN] No, You Really Can’t

https://blogs.oracle.com/maryanndavidson/entry/no_you_really_can_t
Mary Ann Davidson Blog
By User701213-Oracle
Aug 10, 2015

I have been doing a lot of writing recently. Some of my writing has been with my sister, with whom I write murder mysteries using the nom-de-plume Maddi Davidson. Recently, we’ve been working on short stories, developing a lot of fun new ideas for dispatching people (literarily speaking, though I think about practical applications occasionally when someone tailgates me). Writing mysteries is a lot more fun than the other type of writing I’ve been doing.

Recently, I have seen a large-ish uptick in customers reverse engineering our code to attempt to find security vulnerabilities in it. This is why I’ve been writing a lot of letters to customers that start with “hi, howzit, aloha” but end with “please comply with your license agreement and stop reverse engineering our code, already.”

I can understand that in a world where it seems almost every day someone else had a data breach and lost umpteen gazillion records to unnamed intruders who may have been working at the behest of a hostile nation-state, people want to go the extra mile to secure their systems. That said, you would think that before gearing up to run that extra mile, customers would already have ensured they’ve identified their critical systems, encrypted sensitive data, applied all relevant patches, gotten onto a supported product release, and used tools to ensure configurations are locked down – in short, the usual security hygiene – before they attempt to find zero day vulnerabilities in the products they are using. And in fact, there are a lot of data breaches that would be prevented by doing all that stuff, as unsexy as it is, instead of hyperventilating that the Big Bad Advanced Persistent Threat using a zero-day is out to get me! Whether you are running your own IT show or a cloud provider is running it for you, there are a host of good security practices that are well worth doing.

Even if you want to have reasonable certainty that suppliers take reasonable care in how they build their products – and there is so much more to assurance than running a scanning tool – there are a lot of things a customer can do like, gosh, actually talking to suppliers about their assurance programs or checking certifications for products for which there are Good Housekeeping seals (or “good code” seals) like Common Criteria certifications or FIPS-140 certifications. Most vendors – at least, most of the large-ish ones I know – have fairly robust assurance programs now (we know this because we all compare notes at conferences). That’s all well and good, is appropriate customer due diligence, and stops well short of “hey, I think I will do the vendor’s job for him/her/it and look for problems in source code myself,” even though:

– A customer can’t analyze the code to see whether there is a control that prevents the attack the scanning tool is screaming about (which is most likely a false positive)
– A customer can’t produce a patch for the problem – only the vendor can do that
– A customer is almost certainly violating the license agreement by using a tool that does static analysis (which operates against source code)

I should state at the outset that in some cases I think the customers doing reverse engineering are not always aware of what is happening because the actual work is being done by a consultant, who runs a tool that reverse engineers the code, gets a big fat printout, drops it on the customer, who then sends it to us.

Now, I should note that we don’t just accept scan reports as “proof that there is a there, there,” in part because whether you are talking static or dynamic analysis, a scan report is not proof of an actual vulnerability. Often, they are not much more than a pile of steaming … FUD. (That is what I planned on saying all along: FUD.) This is why we require customers to log a service request for each alleged issue (not just hand us a report) and provide a proof of concept (which some tools can generate).

If we determine as part of our analysis that scan results could only have come from reverse engineering (in at least one case, because the report said, cleverly enough, “static analysis of Oracle XXXXXX”), we send a letter to the sinning customer, and a different letter to the sinning consultant-acting-on-customer’s behalf – reminding them of the terms of the Oracle license agreement that preclude reverse engineering, So Please Stop It Already. (In legalese, of course. The Oracle license agreement has a provision such as: “Customer may not reverse engineer, disassemble, decompile, or otherwise attempt to derive the source code of the Programs…” which we quote in our missive to the customer.) Oh, and we require customers/consultants to destroy the results of such reverse engineering and confirm they have done so.

Why am I bringing this up? The main reason is that, when I see a spike in X, I try to get ahead of it. I don’t want more rounds of “you broke the license agreement,” “no, we didn’t,” “yes, you did,” “no, we didn’t.” I’d rather spend my time, and my team’s time, working on helping development improve our code than argue with people about where the license agreement lines are.

Now is a good time to reiterate that I’m not beating people up over this merely because of the license agreement. More like, “I do not need you to analyze the code since we already do that, it’s our job to do that, we are pretty good at it, we can – unlike a third party or a tool – actually analyze the code to determine what’s happening and at any rate most of these tools have a close to 100% false positive rate so please do not waste our time on reporting little green men in our code.” I am not running away from our responsibilities to customers, merely trying to avoid a painful, annoying, and mutually time-wasting exercise.

For this reason, I want to explain what Oracle’s purpose is in enforcing our license agreement (as it pertains to reverse engineering) and, in a reasonably precise yet hand-wavy way, explain “where the line is you can’t cross or you will get a strongly-worded letter from us.” Caveat: I am not a lawyer, even if I can use words like stare decisis in random conversations. (Except with my dog, because he only understands Hawaiian, not Latin.) Ergo, when in doubt, refer to your Oracle license agreement, which trumps anything I say herein! With that in mind, a few FAQ-ish explanations:

Q. What is reverse engineering?

A. Generally, our code is shipped in compiled (executable) form (yes, I know that some code is interpreted). Customers get code that runs, not the code “as written.” That is for multiple reasons: users generally only need to run code, not understand how it all gets put together, and our source code is highly valuable intellectual property (which is why we have a lot of restrictions on who accesses it and protections around it). The Oracle license agreement limits what you can do with the as-shipped code, and that limitation includes the fact that you aren’t allowed to de-compile, dis-assemble, de-obfuscate or otherwise try to get source code back from executable code. There are a few caveats around that prohibition, but there isn’t an “out” for “unless you are looking for security vulnerabilities in which case, no problem-o, mon!” If you are trying to get the code in a different form from the way we shipped it to you – as in, the way we wrote it before we did something to it to get it in the form you are executing – you are probably reverse engineering. Don’t. Just – don’t.

Q. What is Oracle’s policy regarding the submission of security vulnerabilities (found by tools or not)?

A. We require customers to open a service request (one per vulnerability) and provide a test case to verify that the alleged vulnerability is exploitable. The purpose of this policy is to try to weed out the very large number of inaccurate findings by security tools (false positives).

Q. Why are you going after consultants the customer hired? The consultant didn’t sign the license agreement!

A. The customer signed the Oracle license agreement, and the consultant hired by the customer is thus bound by the customer’s signed license agreement. Otherwise everyone would hire a consultant to say (legal terms follow) “Nanny, nanny boo boo, big bad consultant can do X even if the customer can’t!”

Q. What does Oracle do if there is an actual security vulnerability?

A. I almost hate to answer this question because I want to reiterate that customers Should Not and Must Not reverse engineer our code. However, if there is an actual security vulnerability, we will fix it. We may not like how it was found but we aren’t going to ignore a real problem – that would be a disservice to our customers. We will, however, fix it to protect all our customers, meaning everybody will get the fix at the same time. However, we will not give a customer reporting such an issue (that they found through reverse engineering) a special (one-off) patch for the problem. We will also not provide credit in any advisories we might issue. You can’t really expect us to say “thank you for breaking the license agreement.”

Q. But the tools that decompile products are getting better and easier to use, so reverse engineering will be OK in the future, right?

A. Ah, no. The point of our prohibition against reverse engineering is intellectual property protection, not “how can we cleverly prevent customers from finding security vulnerabilities – bwahahahaha – so we never have to fix them – bwahahahaha.” Customers are welcome to use tools that operate on executable code but that do not reverse engineer code. To that point, customers using a third party tool or service offering would be well-served by asking questions of the tool (or tool service) provider as to a) how their tool works and b) whether they perform reverse engineering to “do what they do.” An ounce of discussion is worth a pound of “no we didn’t,” “yes you did,” “didn’t,” “did” arguments.*

Q. “But I hired a really cool code consultant/third party code scanner/whatever. Why won’t mean old Oracle accept my scan results and analyze all 400 pages of the scan report?”

A. Hoo-boy. I think I have repeated this so much it should be a song chorus in a really annoying hip hop piece but here goes: Oracle runs static analysis tools ourselves (heck, we make them), many of these goldurn tools are ridiculously inaccurate (sometimes the false positive rate is 100% or close to it), running a tool is nothing, the ability to analyze results is everything, and so on and so forth. We put the burden on customers or their consultants to prove there is a There, There because otherwise, we waste a boatload of time analyzing – nothing** – when we could be spending those resources, say, fixing actual security vulnerabilities.

Q. But one of the issues I found was an actual security vulnerability so that justifies reverse engineering, right?

A. Sigh. At the risk of being repetitive, no, it doesn’t, just like you can’t break into a house because someone left a window or door unlocked. I’d like to tell you that we run every tool ever developed against every line of code we ever wrote, but that’s not true. We do require development teams (on premises, cloud and internal development organizations) to use security vulnerability-finding tools; we’ve had a significant uptick in tools usage over the last few years (our metrics show this); and we do track tools usage as part of the Oracle Software Security Assurance program. We beat up – I mean, “require” – development teams to use tools because it is very much in our interests (and customers’ interests) to find and fix problems earlier rather than later. That said, no tool finds everything. No two tools find everything. We don’t claim to find everything. That fact still doesn’t justify a customer reverse engineering our code to attempt to find vulnerabilities, especially when the key to whether a suspected vulnerability is an actual vulnerability is the capability to analyze the actual source code, which – frankly – hardly any third party will be able to do, another reason not to accept random scan reports that resulted from reverse engineering at face value, as if we needed one.

Q. Hey, I’ve got an idea, why not do a bug bounty? Pay third parties to find this stuff!

A. Bug bounties are the new boy band (nicely alliterative, no?). Many companies are screaming, fainting, and throwing underwear at security researchers**** to find problems in their code and insisting that This Is The Way, Walk In It: if you are not doing bug bounties, your code isn’t secure. Ah, well, we find 87% of security vulnerabilities ourselves, security researchers find about 3% and the rest are found by customers. (Small digression: I was busting my buttons today when I found out that a well-known security researcher in a particular area of technology reported a bunch of alleged security issues to us except – we had already found all of them and we were already working on or had fixes. Woo hoo!) I am not dissing bug bounties, just noting that on a strictly economic basis, why would I throw a lot of money at 3% of the problem (and without learning lessons from what you find, it really is “whack a code mole”) when I could spend that money on better prevention like, oh, hiring another employee to do ethical hacking, who could develop a really good tool we use to automate finding certain types of issues, and so on. This is one of those “full immersion baptism” or “sprinkle water over the forehead” issues – we will allow for different religious traditions and do it OUR way – and others can do it THEIR way. Pax vobiscum.

Q. If you don’t let customers reverse engineer code, they won’t buy anything else from you.

A. I actually heard this from a customer. It was ironic because in order for them to buy more products from us (or use a cloud service offering), they’d have to sign – a license agreement! With the same terms that the customer had already admitted violating. “Honey, if you won’t let me cheat on you again, our marriage is through.” “Ah, er, you already violated the ‘forsaking all others’ part of the marriage vow so I think the marriage is already over.” The better discussion to have with a customer – and I always offer this – is for us to explain what we do to build assurance into our products, including how we use vulnerability-finding tools. I want customers to have confidence in our products and services, not just drop a letter on them.

Q. Surely the bad guys and some nations do reverse engineer Oracle’s code and don’t care about your licensing agreement, so why would you try to restrict the behavior of customers with good motives?

A. Oracle’s license agreement exists to protect our intellectual property. “Good motives” – and given the errata of third party attempts to scan code the quotation marks are quite apropos – are not an acceptable excuse for violating an agreement willingly entered into. Any more than “but everybody else is cheating on his or her spouse” is an acceptable excuse for violating “forsaking all others” if you said it in front of witnesses.

At this point, I think I am beating a dead – or should I say, decompiled – horse. We ask that customers not reverse engineer our code to find suspected security issues: we have source code, we run tools against the source code (as well as against executable code), it’s actually our job to do that, we don’t need or want a customer or random third party to reverse engineer our code to find security vulnerabilities. And last, but really first, the Oracle license agreement prohibits it. Please don’t go there.

* I suspect at least part of the anger of customers in these back-and-forth discussions is because the customer had already paid a security consultant to do the work. They are angry with us for having been sold a bill of goods by their consultant (where the consultant broke the license agreement).

** The only analogy I can come up with is – my bookshelf. Someone convinced that I had a prurient interest in pornography could look at the titles on my bookshelf, conclude they are salacious, and demand an explanation from me as to why I have a collection of steamy books. For example (these are all real titles on my shelf):

Thunder Below! (“whoo boy, must be hot stuff!”)
Naked Economics (“nude Keynesians!”)***
Inferno (“even hotter stuff!”)
At Dawn We Slept (“you must be exhausted from your, ah, nighttime activities…”)

My response is that I don’t have to explain my book tastes or respond to baseless FUD. (If anybody is interested, the actual book subjects are, in order, 1) the exploits of WWII submarine skipper and Congressional Medal of Honor recipient CAPT Eugene Fluckey, USN, 2) a book on economics, 3) a book about the European theater in WWII, and 4) the definitive work concerning the attack on Pearl Harbor.)

*** Absolutely not, I loathe Keynes. There are more extant dodos than actual Keynesian multipliers. Although “dodos” and “true believers in Keynesian multipliers” are interchangeable terms as far as I am concerned.

**** I might be exaggerating here. But maybe not.



[ISN] Cybercom: Big Data Theft at OPM, Private Networks is New Trend in Cyber Attacks

http://freebeacon.com/national-security/cybercom-big-data-theft-at-opm-private-networks-is-new-trend-in-cyber-attacks/
By Bill Gertz
Washington Free Beacon
July 27, 2015

The commander of U.S. Cyber Command said last week that the Office of Personnel Management hack of millions of records of federal workers shows a new trend toward using Big Data analytics for both nation-state and criminal cyber attacks.

“One of the lessons from OPM for me is we need to recognize that increasingly data has a value all its own and that there are people actively out there interested in acquiring data in volumes and numbers that we didn’t see before,” said Adm. Mike Rogers, the Cyber Command commander and also director of the National Security Agency.

The theft of 22.1 million federal records, including sensitive background information on millions of security clearance holders, will assist foreign nations in conducting future cyber attacks through so-called “spear-phishing,” Rogers said, declining to name China as the nation state behind the OPM hacks.

Additionally, China is suspected in the hack, uncovered in February, of 80 million medical records of the health care provider Anthem, which would have given it access to valuable personal intelligence that can be used to identify foreign spies and conduct additional cyber attacks.

[…]



[ISN] Mad John McAfee: ‘Can you live in a society that is more paranoid than I’m supposed to be?’

http://www.theregister.co.uk/2015/06/04/mad_mcafee/
By Alexander J Martin
The Register
4 June 2015

Infosec 2015 – John McAfee delivered a surprisingly non-controversial keynote speech to the London Infosec Conference on Wednesday afternoon, lauding the value of privacy, doing so – to the concern of his bewildered audience – whilst seemingly tickling himself through the cloth of his pocket.

McAfee’s talk was essentially a rant against governments’ security-compromising activities, summed up by his statement: “We cannot allow a fearful government to create weaknesses in the very software we are trying to protect. By putting backdoors in the software, we have given hackers the access we are trying to prevent.”

Easily the rockstar of infosec, McAfee took to the stage fashionably late – though his audience had remained comfortable, being plied with free alcohol, free food and an enjoyable musical set (wasted on Infosec’s more senior attendees) during their wait.

The man himself, a young 70-year-old in a handsome navy suit, looking and seeming much like a millionaire version of Matthew McConaughey’s Rust Cohle, was quick to address what he regarded as the major political influences upon security and explicitly criticised governments’ notions of backdooring software. A strong approach to a conference which has always had plenty of government security bods attending.

“Take control of your lives,” McAfee urged Infosec. “Say ‘I am going to be responsible for myself, at least to some extent.’ Governments cannot protect you.”

[…]

