Securing an organization's IT infrastructure from modern threats
Threats to an organization can come from:
- Identity
  - Identity theft
- End point
  - Zero-day vulnerabilities, unsafe remote access software, misconfiguration, packet capture at untrusted network locations, malware (Trojans such as keyboard or screen loggers)
- Application
  - Vulnerabilities, DDoS, session hijacking
- Insider
  - Phishing, bots
- Third-party
  - Software supply-chain based attacks
To secure an organization against modern threats, consider the following:
- User VLAN gateways at firewall
- This can be done via the perimeter firewall, or a separate internal firewall can be used
- All VLANs should be L2-only at the core / distribution / access level. The gateway for all user VLANs should be at the perimeter firewall, and all traffic between two user VLANs should go via the firewall (see the sketch below this section).
- In this case have separate VLANs for printers, biometric devices, CCTV, etc. While printers can be accessed by end users, the firewall should not allow a printer to initiate communication with an end-user machine in the other direction.
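A minimal sketch of such a policy, assuming a Linux-based firewall with 802.1Q subinterfaces (interface names such as eth0.10 for users and eth0.20 for printers are hypothetical); commercial firewalls express the same idea through their own policy UI:

    # nftables: user VLAN gateways live on the firewall, inter-VLAN traffic
    # is dropped unless explicitly allowed.
    table inet vlan_policy {
        chain forward {
            type filter hook forward priority 0; policy drop;
            ct state established,related accept
            # users (VLAN 10) may reach printers (VLAN 20) on print ports only
            iifname "eth0.10" oifname "eth0.20" tcp dport { 9100, 631 } accept
            # no rule from eth0.20 to eth0.10, so printers cannot initiate
            # connections towards user machines
        }
    }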
- Micro-segmentation between servers
- Similar to user VLANs, it is ideal if server-to-server traffic is also protected via a firewall. This is very useful to prevent lateral movement by an attacker, in the same way we try with user-to-user isolation via the firewall. Modern micro-segmentation tools give useful alerts, are very quick to deploy, and have very intuitive UIs that help deploy very stringent micro-segmentation policies.
- Perimeter firewall
- There should be a perimeter firewall which ensures that all non-work-related categories are blocked in the URL filter. AV / IPS / anti-spam / file analysis etc. features should be enabled if available. We must do SSL interception at the perimeter by deploying a custom CA at the firewall and pushing this CA via AD to all end machines.
- In terms of Internet access, all LAN to WAN rules should be severely limited to only the required ports / applications. For example, most remote access applications should not work. End stations should not be able to query public DNS servers such as 4.2.2.2 or 8.8.8.8 directly. Ideally consider using the Cloudflare family DNS resolvers (1.1.1.3, 1.0.0.3, etc.) wherever possible.
- For the general LAN-WAN Internet access rule, severely restrict to only the required categories and block all others. Especially consider blocking the malware, adult, VPN, spyware, proxy, remote-access, etc. categories, which have serious security implications.
- In terms of server access from LAN to DMZ, again rules should be very strict. Only users who need access to a given server port / application should be allowed in the firewall. As much as possible, we should not have all ports of a server open to one user, or particular ports of a server open to all users.
- There are many servers/services which are not required by general users, such as IPMI/BMC, hypervisor base host access, and storage or network device management. Such connections should not be allowed from the LAN nor from the DMZ (production VMs). This can be done via the perimeter firewall by ensuring switch and management interfaces are in a different zone (or at least on a different firewall interface).
- Ideally have isolation between test / dev / QA and production environments via firewall zones
- Ideally have isolation between servers with some incoming public access (Public IP/port NAT) and servers which are only accessible internally (Intranet applications).
- Servers should not have access to the LAN, i.e. DMZ to LAN traffic should not be allowed
- Firewall public access should be very secure. Change the default admin username, or create an admin user with a non-obvious username and disable the default admin. Use a very strong password for the perimeter firewall. Ideally enable MFA for firewall admin access.
- VPN users should also have limited per-user access only. For example, if VPN user1 only needs access to one server on a particular port, then over VPN that user should be allowed to access only that required server:port and nothing else.
- From other branches / locations / units too, access to HO (or vice versa) over IPSec should be severely limited as per requirements. We should not have open IPSec tunnels which allow entire subnets at one location to access subnets at another location without firewalling.
- Make good use of Geo-IP filtering to restrict traffic from other countries. If there is a high chance of attacks originating from hostile countries, Geo-IP blocking can protect against them to a large extent.
- Also consider blocking other bad sources based on business requirements:
  - Tor / VPN / proxy IPs. These can also be blocked at the application / category level in the firewall.
  - Known published IP ranges of public clouds (AWS, Azure, GCP, etc.) or even DigitalOcean, Linode, etc., wherever they are not needed (see the sketch below this section).
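As an illustration, a rough shell sketch of blocking published cloud ranges and forcing DNS to approved resolvers, assuming a Linux-based firewall with curl, jq and nftables available (the AWS ip-ranges.json feed is published by AWS; adapt similarly for other providers):

    #!/bin/bash
    # Build an nftables set from the published AWS IP ranges
    curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
        | jq -r '.prefixes[].ip_prefix' | sort -u > /tmp/aws-prefixes.txt

    nft add table inet egress 2>/dev/null
    nft add set inet egress cloud_block '{ type ipv4_addr; flags interval; }' 2>/dev/null
    nft flush set inet egress cloud_block
    while read -r p; do nft add element inet egress cloud_block "{ $p }"; done < /tmp/aws-prefixes.txt

    # Drop forwarded traffic to those ranges; allow DNS only to Cloudflare family
    nft add chain inet egress fwd '{ type filter hook forward priority 0; }' 2>/dev/null
    nft add rule inet egress fwd ip daddr @cloud_block drop
    nft add rule inet egress fwd udp dport 53 ip daddr != '{ 1.1.1.3, 1.0.0.3 }' drop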
- Secure wireless setup
- Instead of having a common wifi network for all users, which defeats the VLAN separation done in the wired network, we need to plan for as many wireless SSIDs (networks) as there are wired networks, i.e. a 1:1 correspondence between wired and wireless networks. The same security policy should apply to an individual irrespective of whether they are connected to the wired or the wireless network.
- A properly configured controller-based wifi which allows creating multiple SSIDs, changing WPA2 passwords centrally, monitoring rogue APs and rogue machines, controlling power / channels for efficiency, etc. is a must for every organization, even from a security point of view.
Backup aspects
- Backup as separate zone in firewall
- Avoid clubbing backup along with other servers in the DMZ. When a server is compromised, attackers get access to the backup server over the LAN / subnet from that server. We want to protect backups even when a server is already compromised. Hence we do not want to allow servers to initiate communication with the backup infrastructure. Only backup servers should be able to initiate communication with a few servers (ESXi/vCenter for VM-level backup; the agent port and machine IP where a backup agent is installed for agent-based backup; etc.).
- The BMC (iDRAC, iLO, etc.) of the backup server should also be in the backup zone. Any backup-related storage, including storage management, should also be in the backup zone.
- The BMC / IPMI of backup-related servers should not be left connected if it has options for factory reset / reimaging of the backup appliance. Typically such BMC / IPMI access is not protected by MFA, and we do not want an attacker to be able to factory reset / reimage these appliances using IPMI. We can still configure these in backup zones during initial deployment but leave the network cable disconnected unless needed.
- Don't enable remote access for backup servers
- Backup servers should not be remotely accessible even for IT team members, not even for a single person. For any backup access the IT team should be forced to move away from their desk to another location, preferably a physically protected data center. No remote access to backups should be possible.
- Immutable backups
- All backups should be immutable. It should be impossible (compliance mode), even for administrators with all usernames, passwords, OTPs, tokens, etc., to delete existing backups before they expire. An administrator should only be able to change future backup goals and retention periods. Existing backups should not be remotely modifiable by anyone.
- Initially backups should be done in governance mode during deployment. Then, once everything is stable and working as expected, we should switch to compliance mode (see the sketch below this section).
- If immutable backup is not possible then at least backups should be on separate storage, ideally in a near-DR / far-DR site and not in the same DC.
- Ideally avoid backup solutions that involve multiple OEMs. Have a single OEM for the entire backup stack including software (preferably on an appliance), hardware, MFA, possibly tape, possibly cloud sync to OEM-specific cloud storage, etc.
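For object storage backup targets, governance vs compliance mode maps directly onto S3 Object Lock. A sketch with the AWS CLI (the bucket name is hypothetical; most S3-compatible backup appliances expose the equivalent feature):

    # Bucket must be created with object lock enabled
    aws s3api create-bucket --bucket example-org-backups \
        --object-lock-enabled-for-bucket

    # Start in governance mode (privileged users can still override)...
    aws s3api put-object-lock-configuration --bucket example-org-backups \
        --object-lock-configuration '{"ObjectLockEnabled":"Enabled",
          "Rule":{"DefaultRetention":{"Mode":"GOVERNANCE","Days":30}}}'

    # ...and once stable, switch new backups to compliance mode: nobody,
    # including the root account, can delete locked versions before expiry.
    aws s3api put-object-lock-configuration --bucket example-org-backups \
        --object-lock-configuration '{"ObjectLockEnabled":"Enabled",
          "Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":30}}}'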
- Offline / Offsite / Cloud backups
- (3-2-1 or 2n+1 backups)
- Apart from on-disk backups there should also be offline backups on tape, and off-site backups on tape / USB / cloud, etc., immutable and/or air-gapped if possible.
- Longer backup duration
- We should avoid a scenario where we only have the last 7-8 days of backups with 1 full and the rest incremental. We should try to have at least one full backup going back 3-4 weeks, and then at least one full monthly backup going back 2-3 months. These durations can be longer based on the organization's budget, compliance / security requirements, etc. Having only a few days of backups is typically not enough, as by the time we realize something has gone wrong, 6-7 days might already have passed.
- Also, just one full backup with the rest incremental may not be enough in case there is some corruption issue with that single full backup.
- Backup tool internal database backup
- We should back up the backup tool's internal database or configuration at all the backup sites (3-2-1). Each site should independently be sufficient for any kind of restoration.
- The configuration backup should also be automated, and this automated configuration backup must be immutable as well.
- Backup restoration steps and validation
- We should have very clear, tried and tested steps for validating restoration of backups. If manual, this restoration test should be repeated once every six months for all applications. If automated, this validation should be done at least once every month for all backups.
- We need to test restoration using only the offline / offsite copies. For example, if backups are going to the cloud then, using only the cloud data without referring to anything on-prem (not even the current backup server), the data should be restored at some external site. This validates that the cloud backup alone is sufficient for restoration without requiring even a single file from the organization's DC, in case restoration is required in a worst-case scenario. A checksum-manifest sketch is given below this section.
- Similarly, if backups are happening to tape, then restoration using only the tapes must be done. We should not need to connect to the production backup server to make sense of the tape backups.
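A minimal checksum-manifest sketch for validating a test restore; all paths are examples:

    # At backup time, store a manifest (with relative paths) alongside the backups:
    ( cd /data && find . -type f -exec sha256sum {} + | sort -k2 ) > manifest.sha256

    # After a test restore onto an isolated host, verify using only offline media;
    # --quiet prints only files that fail verification:
    ( cd /restore-target && sha256sum --quiet -c /media/offline/manifest.sha256 )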
- HA or Replication to DR site
- If there is existing replication to a DR site, check if the backup tool supports maintaining multiple point-in-time snapshots at DR along with the latest production copy. This way the replication mechanism can also provide some kind of backup. However, please note that this is not immutable and hence not at all protected against targeted attacks.
- Backup hardware should be different
- Backup server hardware should be different from the existing production hardware. Don't create backups on VMs on top of production servers by just isolating at the network level, and don't use the same underlying storage server for storing both production data and backups. If there is any hardware issue with either the backup hardware or the production hardware, the other should still function. If we keep backup and production data on the same storage, even on different LUNs or disk pools or RAID groups, we still risk losing everything if there is some software / hardware / security issue with the underlying storage server.
Server and End-point protection
- Protecting from malware (EDR / XDR)
- Instead of a traditional signature-based anti-virus, prefer using an EDR/XDR which looks at the behavior of applications during execution and does not merely maintain a static DB of hashes to compare against.
- Ensure tamper-proofing is enabled so that the agent cannot be killed, uninstalled, disabled, etc. without a specific key / password from the administrator.
- If the selected end-point protection option has firewall and USB blocking features, consider using them to ensure that only the required people have USB access and not everyone can connect a printer / USB device, etc.
- If the selected end-point protection option lists vulnerable applications, see whether those applications can be updated / uninstalled and/or some alternative can be found for them.
- Some end-point protection software might have:
  - Log monitoring similar to SIEM
  - Application blocking similar to host-based IPS
  - Email alerts when something is blocked or suspicious
  - Daily / weekly executive / summary reports
- Consider evaluating all of these as well, if included as part of the license
- File Integrity or Configuration monitoring
- (Possibly via SIEM or through some other independent mechanism)
- There is a need for a program which monitors all critical configuration files, binaries, registry entries (in case of Windows), startup services, etc.
- One very important monitoring target on Linux machines is the /etc folder along with its sub-folders and files. We should be able to correlate every change that happens in /etc with our own actions such as user creation, password change, package installation, some OS configuration change, etc. If there is a change in /etc which we cannot account for (can't correlate with any human change) then we have to be extremely suspicious about it.
- We should monitor all startup-related changes such as enabled services (in both Windows and Linux), cron entries or scheduled tasks, new files in startup folders in both OSs, etc.
- There is also a need to monitor which ports have processes listening on them. If there is a new port / process which is now accepting remote connections over the network, we must inspect that port / process thoroughly. A small auditd and port-baseline sketch is given below this section.
- These changes should lead to appropriate log generation which can be acted on by SIEM
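A minimal Linux sketch of the above using auditd and a listening-port baseline, assuming auditd is installed (the generated events can be shipped to the SIEM):

    # /etc/audit/rules.d/fim.rules : watch /etc and startup-related locations
    cat > /etc/audit/rules.d/fim.rules <<'EOF'
    -w /etc/ -p wa -k etc-change
    -w /var/spool/cron/ -p wa -k cron-change
    -w /etc/systemd/system/ -p wa -k service-change
    EOF
    augenrules --load
    ausearch -k etc-change --start today    # review what changed and why

    # Baseline listening ports once, then diff periodically; any output is suspect
    ss -tlnp | sort > /var/tmp/ports.baseline
    ss -tlnp | sort | diff /var/tmp/ports.baseline -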
- OS hardening
- As mentioned in the CIS Benchmarks, we need to harden all physical servers, VMs, and even golden images and templates, so that by default the privileges on any end-user machine are very limited and related only to business purposes. Users should not have any additional privilege (e.g. ability to create scheduled tasks, administrative access, PowerShell access, etc.) if not required. We can get the CIS hardening status of end machines and servers via the Wazuh SIEM. We should also look at OpenSCAP-based hardening for both Windows and Linux end-points as well as servers (see the sketch below this section).
- Specifically for Linux, in addition to the above suggested hardening, consider the points mentioned at CentOS 8.x Securing a Linux machine, which are typically applicable to almost all flavors of Linux and not just CentOS 8.x.
- Consider having multiple security plugins to protect against tracking and scripting attacks in browsers; for example, for Firefox consider using Recommended firefox plugins.
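A minimal OpenSCAP sketch for a RHEL/CentOS-like system; package, profile and datastream names vary by distribution:

    dnf install -y openscap-scanner scap-security-guide
    oscap xccdf eval \
        --profile xccdf_org.ssgproject.content_profile_cis \
        --report /tmp/cis-report.html \
        /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml
    # Open /tmp/cis-report.html to review pass/fail per CIS rule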
- MFA everywhere
- For all critical components which support MFA, such as the immutable storage appliance, firewall, server Windows / Linux OS, backup application, public cloud accounts, email, etc., we need to implement and use MFA.
- On Linux servers we can enable Google-Authenticator-based MFA so that after entering the correct username and password we are also forced to enter a TOTP before login succeeds (see the sketch below this section).
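A minimal sketch for SSH + TOTP on a RHEL/CentOS-like host (the google-authenticator package is in EPEL there; adapt package names for other distributions):

    dnf install -y google-authenticator
    google-authenticator        # run as the target user; writes ~/.google_authenticator

    # /etc/pam.d/sshd - append:
    #     auth required pam_google_authenticator.so
    # /etc/ssh/sshd_config - ensure:
    #     ChallengeResponseAuthentication yes
    systemctl restart sshd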
- Agent protection
- Once we install agents on each machine which perform some operation for XDR, monitoring, SIEM, etc., we also need to worry about protecting those agents and the entire management-server-to-agent ecosystem. For example, in the case of XDR, if we lose control of XDR access then an attacker can use the XDR console to run desired commands on end machines. Since these would be coming from the XDR console, the XDR agent itself may not protect against this.
- We have seen Nagios agents being installed with the user ID:password of nagios:nagios and then this account being used to compromise systems.
- Any centralized system being used to manage / monitor / configure a large number of systems needs to be protected to safeguard this already-built communication / trust network. This is also applicable to Ansible, Chef, Puppet, etc.
- Any ability to execute commands on all end-points / systems needs to be protected with layers of security proportionate to the impact in case these central systems get compromised.
- Data leak prevention / VDI
- If the organization has very important IP that can be stolen then it should also consider investing in DLP or VDI.
- In the case of DLP we need to invest considerable time to ensure we implement the required DLP policies as per organizational needs. Note that DLP might have some inbuilt patterns to recognize mobile numbers, email IDs, credit card details, passwords, etc. We need to combine them with file formats such as Excel, Word, and PDF, an action such as email / upload, and a target such as Google Drive or Dropbox, to be able to protect the organization from the most basic types of exfiltration attempts.
- DLP is likely to have inbuilt default categories for encrypted data; actions related to email / file upload; categories of sites which can be used for taking data out such as pCloud and WeTransfer; restrictions on what can be copied to the clipboard; when screenshots can be taken; when printouts can be taken; etc. We need to use those to define strong policies.
- Similarly, in the case of VDI we can allow users to work on a remote VDI desktop without allowing copy / paste, file transfer, etc. This can help a lot in protecting IP from being leaked. In this case all IP is always within the VDI VMs in the DC and is never supposed to leave that environment.
- We need to be wary of the use of mobile phone cameras and related scanner (OCR) apps as a possible way of data leaving the organization.
- Mobile device management with disk encryption
- All office assets should be under an MDM umbrella. All office laptops should have disk encryption with options for remote wipe / remote disabling (see the sketch below this section). If we don't do this then we risk losing data via:
  - Live OS boot using a live CD / USB
  - Theft of the disk (temporary or permanent); the same disk can be put into another system to be read
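Quick checks that disks are actually encrypted (device names are examples):

    lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT    # look for crypto_LUKS / crypt devices
    cryptsetup luksDump /dev/sda3           # verify the LUKS header on Linux
    manage-bde -status                      # BitLocker status (elevated Windows prompt)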
Monitoring and asset tracking
- Automatic patching / Asset management / Ticketing / Monitoring
- Tools are required that help monitor the health of servers, switches, applications, etc., track which assets (both hardware and software) are allotted to which individuals, and do automated patch management (at least for the OS). Without these we may not learn about some service being down or which assets are used / assigned to which individuals, and critical security updates may not get applied to official assets.
- Many times things as simple as free disk space / CPU usage / RAM usage monitoring are missed, leading to considerable downtime. While this does not lead directly to security issues, not monitoring these for the backup storage / backup server can be a serious security concern.
- We also need automated asset discovery to discover new assets in the organization. For existing assets we should be able to search the list of installed applications or patches, and search for a given parameter, e.g. IP address, MAC address, etc., across all our assets using the asset management and inventory systems.
- Password and/or privilege management tool
- First and foremost, passwords should be complex, random and unique. We should not use Org@123, password, Ubuntu, etc. as passwords, and the same password should not be used anywhere else. We only need to remember one password (that of the password management tool); the rest can be obtained from the tool.
- 2FA / MFA protection for the password management tool is extremely critical.
- Passwords should be shared using the password management tool. We should not have shared text files / Excel files / password-protected Excel files, etc. for passwords.
- Some password management or privilege management tools log in on behalf of the user without giving the actual password to the user.
- Some password management tools rotate passwords every day. Thus, even if an authorized user stores a password today in a file / physical notepad, that password will not be useful from the next day onwards.
- Some privilege management tools require approval from one other individual before even an authorized person can access a specific resource.
- Many privilege management tools may require the network to be designed such that administrators can only access the privilege management tool from their desk / network. Only the privilege management tool, after authentication and MFA, may allow the admin to access the servers / datacenters via the tool.
- If we use keys for SSH then those keys need to be protected by a passphrase (see Passphrase for ssh-keys). Consider keeping the passwords for such shared keys on common servers in the password management tool as well.
- Users need to keep the password management tool's recovery key / forgot-password questions / forever-valid one-time passwords, etc. very safe.
- If there is provision of a fireproof safe then perhaps a printout of passwords in the fireproof safe can be useful in case the password management tool itself is down due to a bug, hardware failure, network issue, compromise, etc.
- We should try to export the users' password hashes (if allowed as per company policy) and run brute-force attempts on them with normal English dictionary attacks for a reasonable period of time (at least 3-4 days); a sketch is given below this section. This way, if user passwords are password, secret, abcd123, etc. then we can crack those and ask the users to set more complex passwords. This is required if password complexity was configured later on, or if password complexity is not enforced for the root / administrator user, etc.
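A few related command sketches (file names and wordlist paths are examples; hash audits should only be run with written approval):

    # Generate complex, random, unique passwords:
    openssl rand -base64 18

    # Add or change the passphrase on an existing SSH private key:
    ssh-keygen -p -f ~/.ssh/id_ed25519

    # Dictionary audit of exported hashes with John the Ripper:
    john --wordlist=/usr/share/wordlists/rockyou.txt hashes.txt
    john --show hashes.txt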
- SIEM including Central logging
- Along with basic health monitoring there should also be central logging with analytics and security alerts on these central logs (SIEM). A SIEM is also critical for doing BAS (Breach and Attack Simulation) and for forensic analysis post-incident.
- A SIEM (central logging) is only useful if it receives events related to system changes. If no log is generated when we create a local user account, change a password, have a bad login attempt, change a startup procedure, set up a new service, disable the firewall, etc., then that SIEM / central logging service is not going to be very effective.
- Hardware monitoring
- We should also have a mechanism to monitor for hardware failures. These failures should generate emails or a call-back to the OEM for part replacement.
- There is also a need for periodic manual validation of hardware health, by visiting the datacenter or accessing the BMC (iDRAC, iLO, etc.), to ensure that the hardware is healthy.
- 24x7 monitoring
- Once there are tools such as XDR, monitoring, SIEM, backup, etc., we need dedicated teams to monitor the related alerts and dashboards 24x7. If the tool alerts and dashboards are not monitored continuously then we are likely to lose out on critical information already captured by the tools during / before a severe attack. A good way of doing this is to engage with NOC/SOC vendors who have teams dedicated to going through intelligence feeds, published CVEs, news articles, etc. and doing proactive threat hunting in the environment before it is too late.
- A typical organization has an attacker moving around for 200 days before they are detected, and around 70 days are required to fix the breach and restore complete operations after initial detection. Hence a dedicated specialized team doing 24x7 monitoring is very important for critical infrastructure - banks, hospitals, defense, industry (e.g. pharma), etc.
- The selected NOC/SOC vendor themselves should:
  - Be certified with SOC 2, ISO 27001, etc.
  - Have the option to visit the customer location in person (on-site visit), or have a backup link used only for SOC/NOC connectivity
  - Have the ability to also do managed services (L2-L3 support) and not just inform about issues; the SOC / NOC vendor should help the customer continuously improve their security posture and not just alert them about issues
  - Do proper recording of evidence (VM snapshots including memory, emails, logs, online meetings post-incident for discussion, screenshots / desktop recordings, older configuration backups, etc.) during an incident
- Cyber insurance
- Once we have taken the required security measures, consider looking into cyber insurance. This helps with validation of the environment, as eligibility for and the premium of the insurance will depend upon the implemented security measures. Also, if we go for insurance then we have some financial cover in case we are targeted by an attacker and lose business function due to that attack.
Health checks and validating security
- VA-PT
- Through asset or patch management we are in any case likely to get a list of all assets running old unpatched versions. Beyond that we should also invest in VA-PT to get the environment validated by security experts, so that if there is any known vulnerable service, application or server, we can learn about it and patch it.
- We can get a list of vulnerabilities via an internal scan of the applications within each machine via an XDR such as SentinelOne or a SIEM such as Wazuh. A basic network-side sketch is given below this section.
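As a rough unauthenticated network-side complement, nmap's "vuln" script category flags services with known issues (scope and targets must be approved in advance; the IP below is a documentation example):

    nmap -sV --script vuln 192.0.2.10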
- Breach Attack and Simulation (BAS)
- BAS is typically used to check how good the security / monitoring within the environment is. Once we have invested in XDR, SIEM, 24x7 monitoring, etc., we also need to validate whether, during an actual attack, we actually get SIEM alerts, whether the team is able to detect the malicious activity, whether they know what immediate actions can be taken during an attack without waiting for discussion / approval, etc. BAS tools help in checking exactly this.
- BAS and VA-PT are both periodic exercises and must be done at least once every quarter, or even more frequently in the case of automated setups.
- Auditing
- All critical infrastructure such as backups, firewall policies, and administrative accounts for firewall / AD / email / OS, etc. should be audited periodically. We should also look for potential back doors (e.g. authorized public keys, sudo access) that might have been left by an application, a past employee, an attacker, etc. (see the sketch below this section).
- Creation of administrative users, firewall security policies, etc. should be tracked via ticketing and change management. All actions done on critical infrastructure should be traceable back to a change request / related approval during audit.
- Ideally the organization should have industry certifications such as ISO 27001 to ensure that proper policies and procedures are in place for information handling. Going through these certifications can help in setting up many of the audit / monitoring related processes suggested in this article.
- It is also beneficial to get audited by legally recognized entities. For example, in India it makes sense to get audited by a CERT-In empaneled organization. This gives a lot of legal value to the validation / audit being done and should help reduce legal issues in case of any untoward incident.
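A minimal Linux sketch for hunting such leftover back doors:

    # Accounts other than root with UID 0:
    awk -F: '($3 == 0 && $1 != "root") {print}' /etc/passwd

    # Unexpected sudo grants:
    grep -rE 'NOPASSWD|ALL' /etc/sudoers /etc/sudoers.d/

    # SSH authorized keys nobody can account for:
    for f in /root/.ssh/authorized_keys /home/*/.ssh/authorized_keys; do
        [ -f "$f" ] && echo "== $f" && cat "$f"
    done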
- Security awareness trainings by reputed external trainers
- There should be proper official security awareness training for all employees: don't reply to phishing emails, don't download cracked / unlicensed / unvalidated software on office machines, don't click on email links sent by unknown people, don't get fooled by fake promises of awards / gifts / discounts, etc.
- There must be a second (alternate) channel for validating the authenticity of all critical messages before any action is taken on them. For example, if the finance team is asked to do some bank transfer via email / WhatsApp, they should call via landline or try to meet in person and get the order verified before initiating the transaction.
- Note that anyone can send email with any From address, any timestamp, etc. Don't trust an email just by looking at a few parameters.
- Training should also cover the latest threats due to GPT, deep-fake, etc. technologies, which did not exist a decade ago and are very easy for malicious people to set up.
- There is a need for incident response / security training with higher technical content for IT teams. This could be Certified Ethical Hacker or other related training, plus training on preserving forensic evidence (e.g. taking a VM snapshot with memory before killing an attacker's process, etc.).
- Email banner
- For every email that comes from external servers there should be a banner on top of the email indicating that the email is external. Users should be trained, again and again, that for internal emails within the company the banner will not be present. Hence, if they see such a banner on an email which is supposed to be internal (MD or CFO to finance asking for some bank transfer), just by looking at the banner the team should realize that it is a phishing email.
- Secure file sharing
- We should avoid having world-writable shared folders accessible by everyone in the organization. Such shared folders become a very easy way for malware to spread. We should have password-protected shares, and each person should have (limited) read/write access to a share only as per business requirements.
- The file sharing system should support versioning, sync to cloud and an audit trail at the minimum.
- Honeypots
- One way of monitoring security is via honeypots. For this, set up easy targets in both LAN and DMZ with weak passwords, older vulnerable application versions, and useful-sounding names such as passwords, auth, backup, etc., so that these machines become a lucrative target for attackers. We need to monitor these machines without letting the monitoring be detected easily (a minimal listener sketch is given below this section). Any attack on honeypots, especially in LAN or DMZ with only private IPs, should be taken seriously.
- If we are also being targeted on the WAN by adversaries then perhaps consider having a second DMZ, e.g. DMZ2 or Honeypot, and configure a few WAN-facing (NATted) honeypot servers with WAN access as well.
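A minimal listener sketch using ncat from the nmap package (netcat flags vary between variants; real deployments should prefer a purpose-built honeypot such as Cowrie; the port and log path are examples):

    # Log every connection attempt to an otherwise unused port
    ncat -lk 2222 --sh-exec \
        'echo "$(date -Is) hit from $NCAT_REMOTE_ADDR" >> /var/log/honeypot.log'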