Zero-Day on Webmin: What Does That Mean?

First of all, one needs to know what a zero-day is, as well as what Webmin is.

Webmin is easier to explain. If you go to webmin.com, the first two sentences you see are this explanation: “Webmin is a web-based interface for system administration for Unix. Using any modern web browser, you can setup user accounts, Apache, DNS, file sharing and much more.”

Yes, but what does that mean?

Here is the configuration page:

So Webmin is software that allows a system administrator to more easily administer websites, DNS configuration, file sharing, and more. In short, it makes it easier to administer and run a Unix or Linux server.

 

So many Unix (or Linux) systems run this Webmin software to make life easier for the IT person. But then along comes a zero-day, just like many before this one (Oversitesentry 12/15/15 post):

  • Belkin router zero-day blog post from 11/8/14
  • FireEye and Kaspersky software hit with a zero-day – blog post from 9/8/15
  • LastPass password manager zero-day flaw – blog post from 07/27/16

So as you see, this is a recurring theme for all kinds of software, including security software and administrative software like Webmin.

A zero-day means that there is a vulnerability out there that can be used to hack your computer AND there is NO patch to fix it.

Check out this image:

 

 


It shows how, after a vulnerability is introduced (t-v) and an exploit is released in the wild (t-e), we have a zero-day vulnerability. At this point anyone who runs the exploit code and has the infrastructure to make money (like ransomware) can hack the software. So the Unix and Linux machines that run the Webmin admin software are vulnerable until Webmin can create a patch (t-p). Then, once the patch is released, the administrator still has to install it.

 

How long will it take for the patch to be released and installed? Sometimes it is 30 days, and sometimes 60 or longer.
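To put rough numbers on that window, here is a minimal sketch (the dates are invented purely for illustration) that computes how long a system stays exposed between the exploit's release and the patch actually being installed:

```python
from datetime import date

# Hypothetical dates for one vulnerability lifecycle (illustration only)
t_v = date(2019, 1, 10)   # vulnerability introduced in the code
t_e = date(2019, 8, 10)   # exploit released in the wild (zero-day begins)
t_p = date(2019, 8, 26)   # vendor releases a patch
t_i = date(2019, 9, 30)   # administrator installs the patch

zero_day_window = (t_p - t_e).days   # days with no patch available at all
exposure_window = (t_i - t_e).days   # days the system is attackable

print(f"Days with no patch available: {zero_day_window}")
print(f"Total days exposed to the exploit: {exposure_window}")
```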

Let me know if you need help discussing this.

 

 

 

 

What I got out of BlackHat and DEFCON

First I must say I did not go to Las Vegas; all I did was hunt the Internet for pieces of information. I did not copy the material completely, but edited it to make it easier to understand when reading only (versus seeing the presentation given in the hall):

“Controlled Chaos: the Inevitable Marriage of DevOps & Security” (Kelly Shortridge and Nicole Forsgren) is an interesting and thought-provoking presentation.

This presentation is listed at this page: https://www.blackhat.com/us-19/briefings/schedule/

Here is the relevant information in the presentation:

What are the principles of chaotic security engineering?

  1. Expect that security controls will fail & prepare accordingly
  2. Don’t try to avoid incidents – hone your ability to respond to them

What are the benefits of the chaos/resilience approach?

Time to D.I.E. instead of the C.I.A. triad, which is commonly used as a model to balance infosec priorities.

CIA first: Confidentiality – Integrity – Availability

Confidentiality: Withhold info from people unauthorized to view it.

Integrity: Data is a trustworthy representation of the original info.

Availability: Organization’s services are available to end users.

But these are security values, not qualities that create security. Thus we need a model promoting qualities that make systems more secure.

D.I.E. model: Distributed, Immutable, Ephemeral

Distributed: Multiple systems supporting the same overarching goal. This model reduces the impact of DoS attacks by design.

Immutable: Infrastructure that doesn’t change after it’s deployed; servers are now disposable “cattle” rather than cherished “pets”. The infrastructure is more secure by design – ban shell access entirely. Although the lack of control is scary, unlimited lives are better than nightmare mode.

Ephemeral: Infrastructure with a very short lifespan (it dies after its task). Ephemerality creates uncertainty for attackers (persistence = nightmare), i.e. installing a rootkit on a resource that dies in minutes is a wasted effort.

Optimize for D.I.E. to reduce your risk by design and support resilience.

So what metrics are important in resilient security engineering?

TTR (time to recovery) is as important for infosec as it is for DevOps.

Time Between Failure (TBF) will lead your infosec program astray.

Extended downtime is bad (it makes users sad), not more frequent but trivial blips.

Prioritizing the avoidance of failure inhibits innovation.

Instead, harness failure as a tool to help you prepare for the inevitable.

TTR > TTD – who cares if you detect quickly if you don’t fix it?

Determine the attacker’s least-cost path (hint: it does not involve a 0day).

Architecting Chaos

 

Begin with ‘dumb’ testing before moving to ‘fancy’ testing

  • Controlling Chaos: Availability
  • Existing tools should cover availability
  • Turning security events into availability events appeals to DevOps (see the sketch below)
    • Tools: Chaos Monkey, Azure fault analysis, Chaos-Lambda, Kube-monkey, PowerfulSeal, Podreaper, Pumba, Blockade
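To make the availability idea concrete, here is a minimal chaos-monkey-style sketch (the container names are made up, and it defaults to a dry run) that terminates one container at random so you can watch whether the service stays available:

```python
import random
import subprocess

# Hypothetical containers belonging to one service tier (names are assumptions)
CANDIDATES = ["web-1", "web-2", "web-3", "worker-1"]

def kill_random_container(dry_run: bool = True) -> str:
    """Pick one container at random and stop it, chaos-monkey style."""
    victim = random.choice(CANDIDATES)
    if dry_run:
        print(f"[dry run] would run: docker kill {victim}")
    else:
        # 'docker kill' is the standard Docker CLI command; the names above are fictional
        subprocess.run(["docker", "kill", victim], check=True)
    return victim

if __name__ == "__main__":
    kill_random_container(dry_run=True)
```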

 

  • Controlling Chaos: Confidentiality
  • Microservices use multiple layers of auth that preserve confidentiality
  • A service mesh is like an on-demand VPN at the application level
  • Attackers are forced to escalate privileges to access the iptables layer
  • Test by injecting failure into your service mesh to verify authentication controls (see the sketch below)
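As one way to exercise that idea, here is a minimal sketch (the internal service URL is a made-up example) that checks a mesh-protected endpoint actually rejects a request made without the client certificate or token the mesh would normally supply:

```python
import requests

# Hypothetical internal service endpoint behind a service mesh (assumption)
SERVICE_URL = "https://orders.internal.example:8443/api/v1/orders"

def test_unauthenticated_request_is_rejected() -> None:
    """A request without the mesh-issued client certificate should fail."""
    try:
        resp = requests.get(SERVICE_URL, timeout=5)  # no client cert supplied
    except requests.exceptions.SSLError:
        print("PASS: TLS handshake rejected without a client certificate")
        return
    if resp.status_code in (401, 403):
        print(f"PASS: service refused the request with HTTP {resp.status_code}")
    else:
        print(f"FAIL: unauthenticated request succeeded with HTTP {resp.status_code}")

if __name__ == "__main__":
    test_unauthenticated_request_is_rejected()
```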

 

  • Controlling Chaos: Integrity
  • Test by swapping out certs in your ZTNs (zero-trust networks) – all transactions should fail
  • Modify encrypted data and see if your FIM (file integrity monitoring) alerts on it (see the sketch below)
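Here is a minimal FIM-style sketch (the file path is a made-up example) that records a SHA-256 baseline, tampers with the file, and shows the mismatch a FIM tool would alert on:

```python
import hashlib
from pathlib import Path

# Hypothetical file that a FIM tool would baseline (assumption)
TARGET = Path("config/payment-service.conf")

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check_integrity(baseline: str) -> bool:
    """Return True if the file still matches its recorded baseline hash."""
    current = sha256_of(TARGET)
    if current != baseline:
        print(f"ALERT: {TARGET} changed (baseline {baseline[:12]}..., now {current[:12]}...)")
        return False
    print(f"OK: {TARGET} matches its baseline")
    return True

if __name__ == "__main__":
    TARGET.parent.mkdir(parents=True, exist_ok=True)
    TARGET.write_text("original contents")            # set up a demo file
    baseline_hash = sha256_of(TARGET)                  # record the baseline
    TARGET.write_text("original contents - tampered")  # simulate tampering
    check_integrity(baseline_hash)                     # the alert should fire
```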

 

  • Controlling Chaos: Distributed
  • Distributed overlaps with availability in the context of infrastructure
  • Multi-region services present a fun opportunity to mess with attackers
  • Shuffle IP blocks regularly to disrupt attackers’ lateral movement

 

  • Controlling Chaos: Immutable
  • Immutable infrastructure is like a phoenix – it disappears and comes back
  • Volatile environments with continually moving parts raise the cost of attack
  • Create rules like: “If there is a write to disk, crash the node” (see the sketch below)
  • Attackers must stay in-memory, which hopefully makes them cry
  • Metasploit Meterpreter or a webshell: touch passwords.txt & it’s gone
  • Mark garbage files as “unreadable” to craft enticing bait for attackers
  • Possible goal: architect immutability turtles all the way down
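A minimal sketch of that “write to disk means recycle the node” rule might look like this (the watched directory is a made-up example; a real setup would signal the orchestrator to replace the node rather than simply exit):

```python
import sys
import time
from pathlib import Path

# Hypothetical directory that should never change on an immutable node (assumption)
WATCHED = Path("/srv/app")

def snapshot(root: Path) -> dict:
    """Record path -> modification time for every file under root."""
    return {p: p.stat().st_mtime for p in root.rglob("*") if p.is_file()}

def watch_and_recycle(interval: float = 5.0) -> None:
    """Poll the watched tree and exit non-zero as soon as anything changes."""
    baseline = snapshot(WATCHED)
    while True:
        time.sleep(interval)
        if snapshot(WATCHED) != baseline:
            # A supervisor/orchestrator is expected to replace the instance on exit
            print("Unexpected write detected - recycling this node")
            sys.exit(1)

if __name__ == "__main__":
    watch_and_recycle()
```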

 

  • Controlling Chaos: Ephemeral
  • Infosec bugs are state-related, so get rid of state, get rid of bugs
  • Reverse uptime: longer host uptime means greater security risk
  • Test: rotate API tokens and check whether services still accept the old tokens (see the sketch below)
  • Test: inject hashes of old pieces of data to ensure no data persistence
  • Use “arcade tokens” instead of using direct references to data
  • Leverage lessons from toll fraud – cloud billing becomes a security signal
  • Test: exfil TBs or run a cryptominer to inform billing spike detection
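For the token-rotation test, a minimal sketch could look like the following (the API endpoint and token values are made-up placeholders):

```python
import requests

# Hypothetical API endpoint and tokens used for the rotation test (assumptions)
API_URL = "https://api.internal.example/v1/status"
OLD_TOKEN = "token-issued-before-rotation"
NEW_TOKEN = "token-issued-after-rotation"

def call_api(token: str) -> int:
    """Call the API with a bearer token and return the HTTP status code."""
    resp = requests.get(API_URL, headers={"Authorization": f"Bearer {token}"}, timeout=5)
    return resp.status_code

def test_old_token_is_rejected() -> None:
    assert call_api(NEW_TOKEN) == 200, "new token should work"
    old_status = call_api(OLD_TOKEN)
    assert old_status in (401, 403), f"old token still accepted (HTTP {old_status})"
    print("PASS: rotated token rejected, new token accepted")

if __name__ == "__main__":
    test_old_token_is_rejected()
```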

How should infosec and DevOps come together and develop all of these concepts?

It has to be done as a cultural “marriage”: cultivate buy-in for resilience and chaos engineering.

This is a marathon, not a sprint. When changing culture: change what people do, not what they think.

———————————————

There are a lot more suggestions, but the main theme that I took out of these presentation slides is that you can make your defense more resilient and tougher by making it a little bit chaotic. Immutable and ephemeral are good concepts to think about and use in your infrastructure. Every environment is different and will require coordination and rethinking of how things work, but it is good to work some of these concepts into your environment.

Here is a great piece of thinking: don’t keep your systems up as long as possible, since long uptime is itself a security risk (besides patching and other issues).

Using short-lifespan systems with frequent rebooting (relatively frequent – every day, for example) makes the attacker’s life much more difficult. Of course patching requires some rebooting, but monthly or quarterly reboots are not frequent enough.

Also here are some links from DEFCON

First the Media presentation  webpages: https://media.defcon.org/DEF%20CON%2027/DEF%20CON%2027%20presentations/

(I always include the full link instead of Media.defcon.org link so one can see where it will go)

First I look at the Speaker’s bio and quick overview of the presentation given at this link: https://www.defcon.org/html/defcon-27/dc-27-speakers.html

Then I download the information freely available on the Internet.  I will have more posts on the presentations at DEFCON and Blackhat.

 

 

Risk Analysis Gone Wrong?

Since a picture says a thousand words, here is an attempt at explaining risk analysis.

The rows are “Impact on Environment”: none, minimal, minor, significant, major, critical

The columns are “Likelihood” (what is the percent chance it will happen): not likely, low, medium, medium-high, high, will happen.
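As a simple illustration (my own example, not part of the original matrix image), those rows and columns can be turned into a numeric score to help rank what to fix first:

```python
# Impact rows and likelihood columns from the matrix described above
IMPACT = ["none", "minimal", "minor", "significant", "major", "critical"]
LIKELIHOOD = ["not likely", "low", "medium", "medium-high", "high", "will happen"]

def risk_score(impact: str, likelihood: str) -> int:
    """Score = (impact index + 1) * (likelihood index + 1); higher means riskier."""
    return (IMPACT.index(impact) + 1) * (LIKELIHOOD.index(likelihood) + 1)

# Example: an IoT camera with critical impact but only a medium likelihood
print(risk_score("critical", "medium"))          # 6 * 3 = 18
print(risk_score("significant", "will happen"))  # 4 * 6 = 24
```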

These are not “real” systems in anyone’s network, only an example of different CVE (Common Vulnerabilities and Exposures) risks in a hypothetical company. I picked on the IoT systems as the likely weak link (one has to update the camera or UPS device software or one can be hacked). IoT systems are a weak link since they are not as easy to upgrade, yet they require upkeep like all systems.

In the past I was trying to explain the weak links with this picture:

The problem is that when one system is hacked, it leaves the whole network, with all its critical systems, open.

With the new image, I am trying to explain what happens if a less important system is hacked (like the IoT example): a system whose vulnerability has a critical impact but only a medium likelihood of getting hacked.

Once hacked, this system allows the attacker to review other targets. Systems that have lower CVE scores (3-6) are canvassed, and with the right vulnerabilities the hacker will attack them and set up persistent methods to stay in the network. Of course the idea is not just to stay in the network; the attacker wants to attack valuable targets.

Such as exploiting a high CVE on less critical systems before the final attack on a critical system at the highest level.

The ultimate and worst possible attack is a remote code execution attack, since with a single exploit one can execute code on the system; for a hacker it is easily done.

So explaining the attack in total gives one a more complete understanding of the ultimate goal. But what is even more important? Now having the ability to assess risk better. Instead of assessing each device separately with each vulnerability, one must assess the impact and likelihood with a total attack in mind.

Which means? The lower vulnerabilities can have higher impacts. How should we account for this phenomenon?

We have to become attackers (even if only hypothetically) to figure out which lower-vulnerability system would be nice to have… so that the hypothetical attack can advance through to the eventual goal.

You might be saying now – that’s all? That is all I have to do? Sort my systems, figure out the vulnerabilities, and then patch them? Well, it is not that easy, since life brings vacations, sicknesses, labor issues, and other things your way. Vulnerabilities may come at inopportune times (they do not care if your family has an event), and the hacker will hack you at Christmas without batting an eye. The truth of it is that one reason people and companies get hacked is that their vulnerability management programs do not take sickness and vacations into account. Thus labor is always pushed into ever more difficult situations. There also always seems to be a push for cost containment in IT and computer security, since it is assumed all systems should be secure; a cost was not associated with computer security in the past. This is why many companies lose their cohesion over time, and then something happens and the attackers get in.

Once the attacker has a toehold, it is possible to stay undetected for months. In the meantime, the patching lifecycle is front and center as the reason many systems get hacked as well.

Notice that when a vulnerability is found by a researcher, it takes many days to actually get a fix for it, and then yet another few weeks before it is installed on your system. It may be 60 days before the system is safe from attack, so we are in a constant state of risk in our networks. This is why each month’s new vulnerabilities make an important report to view, and why we must continually test for any potential weaknesses in the network.

 

Now that you know the full reasons from A to Z, it is easier to actually assess risk on systems.

What you need when assessing risk is to review all possible risks and decide what to focus on next.

Contact us for more information or to discuss your risk assessment.

Also, the latest Capital One hack seems to have been a cloud misconfiguration – including the question of why private information was being stored in a public cloud at all. Cyberscoop discusses this in more detail. The breach response may have been fast, but there was a major failure of architecture.

 

Interesting take on the Capital One breach from a former employee: https://medium.com/cloud-security/whats-in-your-cloud-673c3b4497fd

He says that the configuration was faulty, as one IAM (Identity and Access Management) account could be used to access all data (which is a large weak link), i.e. if a hacker can get one account’s username and password, they have all of the data.
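As an illustration of what to look for, here is a minimal sketch (using a made-up AWS-style IAM policy document, not Capital One’s actual configuration) that flags Allow statements granting wildcard actions on all resources:

```python
import json

# A hypothetical AWS-style IAM policy document (illustration only)
POLICY_JSON = """
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:*", "Resource": "*"}
  ]
}
"""

def overly_broad_statements(policy_text: str) -> list:
    """Return Allow statements that combine wildcard actions with all resources."""
    policy = json.loads(policy_text)
    findings = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if (stmt.get("Effect") == "Allow"
                and any(a.endswith("*") for a in actions)
                and "*" in resources):
            findings.append(stmt)
    return findings

if __name__ == "__main__":
    for stmt in overly_broad_statements(POLICY_JSON):
        print("Too broad:", stmt)
```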

The thing to do is to perform threat modeling and review your architecture as well as vulnerability management.

Compliance vs Framework

Is it better to focus on compliance or on a framework system?

I.e. PCI or HIPAA compliance versus ITIL or COBIT for example.

There are more regulations coming, so let’s add a couple of the US-based ones: SHIELD (Stop Hacks and Improve Electronic Data Security) and CCPA (California Consumer Privacy Act).

  1. SHIELD – the Stop Hacks and Improve Electronic Data Security Act, became law in New York (January 2019). Organizations must adopt “reasonable safeguards to protect security, confidentiality and integrity” of private information.
  2. CCPA – takes effect in January 2020 and requires broad protection of information (job descriptions, IP addresses, web browsing history, and more personal data like addresses).

Red Gate software has an interesting comparison of the compliance and regulation issues in the USA.

In the case of ‘who’ is most affected by a compliance or framework focus, we need to define the audience first. The audience for this blog post is the small-to-medium business (SMB) person in charge of the business, or the top IT person. An enterprise business will eventually adopt a framework, compliance, and all regulations; the larger the business is, the more likely a framework makes sense.

What will an SMB entity’s decision require?

  1. It depends mostly on the organization – how big it is
  2. How many people and computers there are, and which type of compliance is a must-have
  3. The issue is how decisions are made from the business to IT

In the past I have been in situations where I am in charge of the IT department and the decision process leads to the Operations Officer or President. Some business need is presented to either officer, and then I am tasked as IT to provide a solution to the basic business need (a new computer system) or a bigger task like adding a new branch.

These basic decisions are not complicated, but they do set a direction for the company. When buying a new device, does it get checked to see if it is configured for security? When designing a new branch system, how will the new branch be integrated into the current systems?

Under PCI compliance, all one needs to do is segment the network that the payment system is on, and now compliance is easier to prove. Of course, if that can’t be done due to business needs which integrate credit card payment and customer information, then there is no segregation of credit card data from the other streams of data in the company.

Whenever the lines between the compliance needs and the rest of the company become blurred, a framework could help with a solution.

Governance is when a group of people (the board) makes decisions with a future direction in mind. The decisions become more strategic, as several perspectives are weighed: business needs (CEO/COO), cybersecurity (CISO), information technology (CIO), and other business leaders, depending on specialization. Each new direction or decision, like starting to create branches of the company, can be built in many different ways using technology. The governance board will publish the decisions and create a security policy which covers, for example, bringing your own devices onto the network (allowed only on the guest network).

What is COBIT, for example? “COBIT is an IT management framework developed by the ISACA to help businesses develop, organize and implement strategies around information management and governance.” CIO.com has an article that gives a decent overview (a third party looking at COBIT, instead of an ISACA review).

So there are 40 governance and management objectives for establishing a new governance program. And most interesting, we can use maturity and capability measurements. One can now truly keep all company factors in mind to create an IT governance strategy.

The difference with PCI compliance is stark, as PCI compliance needs a quarterly report with a method to review and resolve vulnerability assessments with a patching program. Basically, a vulnerability management program will write the PCI compliance report without too many additional points.

So PCI compliance does not address how to make future decisions, although one can see how a decision could affect the compliance report. There is no mechanism that says with A, B, and C you should take “this acme action”. In fact, only credit card (CC) data is the focus of the compliance standard. The problem in an integrated environment (without segmented areas of the network to keep the CC data in) is that all devices must be opened up to vulnerability management.

There are more regulations that focus on privacy data like IP addresses, physical addresses of customers, cookies, and any other possibly privacy-revealing data of potential customers. This would be the CCPA in California.

Another regulation is the NY SHIELD law, which sets minimum cybersecurity requirements. It also revises the current NY data breach notification law.

Courtney Bowman has a good blog post discussing this Act.

Don’t forget to include a pervasive testing regimen to help your IT staff validate the environment. PCI compliance requires it, and thus it also belongs in all governance initiatives.

Here is our focus (the testing of the environment): we use tests and reports to help the governance board make decisions to complete business goals.

Contact Us to discuss

Threat Hunting in Your Network

We should hunt for threats in our network – i.e. find possible attacks in our network to see what is being attacked and whether we can start to counter the attacker’s moves.

In case you don’t know, below is the MITRE ATT&CK framework; the green highlights are the items you may want to pay attention to.

Olaf Hartong has developed a few scripts that will help find potential Indicators of Compromise (IOCs). He uses Sysmon (Windows events created by Microsoft’s Sysmon tool) to help us find the IOCs.

Focus on events such as:

  • Process creation (with full command line and hashes)
  • Process termination
  • Network connections
  • Various file events
  • Driver/image loading
  • Create remote threads
  • Raw disk access
  • Process memory access
  • Registry access (create, modify, delete)
  • Named pipes
  • WMI events

Olaf’s sysmon-modular GitHub repository

The idea is to use a ruleset that works in your environment and is not noisy (does not produce too many log events which are not useful).
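As a starting point for pulling those events, here is a minimal sketch (assuming a Windows host with Sysmon writing to its default Microsoft-Windows-Sysmon/Operational log) that uses the built-in wevtutil tool to grab the newest process-creation events (Sysmon Event ID 1):

```python
import subprocess

# Assumes a Windows host with Sysmon installed; this is Sysmon's default log name
LOG_NAME = "Microsoft-Windows-Sysmon/Operational"

def recent_process_creations(count: int = 20) -> str:
    """Return the newest Sysmon Event ID 1 (process creation) records as text."""
    cmd = [
        "wevtutil", "qe", LOG_NAME,
        "/q:*[System[(EventID=1)]]",   # XPath filter for process-creation events
        f"/c:{count}",                 # number of events to return
        "/rd:true",                    # reverse direction: newest first
        "/f:text",                     # human-readable output
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(recent_process_creations())
```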

I found Olaf’s page from a YouTube presentation on my Security News Analyzed page, from IronGeek’s BSides Cleveland videos – specifically “Operationalizing MITRE ATT&CK Framework”.

Here is the relevant screenshot:

So we can use Sysmon to see specific events mapped onto the MITRE framework, which will help us understand whether we have an attacker in our network.

This will further enhance our ability to make adjustments to our network as we see attacks move from system to system. Each network is different and thus requires unique methods. But some automation is good, as the number of log events can be staggering. We do not want to drink from a firehose; we will just get wet.

Contact us to help you evaluate this for your environment.