
Writing Policies, Standards, and Procedures for Your Next IT Assessment

Writing policies can be hard; writing good policies can be even harder. Though policies, standards, and procedures are often the last thing on people's minds, they are a necessity. When building a cybersecurity program, technical controls are not the only thing that matters: policies, standards, and procedures (PSPs) are required documentation for any information technology assessment. Whether you are pursuing ISO 27000, NIST, SOX, or any other certification or framework, auditors will want to review your documentation.

Auditors want to ensure that your PSPs back up your IT program. They will evaluate how the technology is configured and implemented, and they will review your PSPs to confirm that IT governance has been established for the organization. The following describes what policies, standards, and procedures are and how to craft each document.

Policies

Policies are overarching documents that provide an overview of a control objective. They establish IT governance and the intent to apply administrative and technical controls. Because policies stay at a high level, anyone can read them without learning which specific technologies are in use or how they are implemented.

"Policy" is often used loosely to mean any written document, including standards and procedures. A policy does not go into detail about how a particular technology is configured or the process for doing so; it exists to show the intent that such controls are in place.

Standards

Standards are medium- to low-level documents that describe how a device is configured or which technologies are acceptable to use. A standard details, for example, adequate encryption levels or the SPF, DKIM, and DMARC settings that must be configured for email. Standards back up what was stated in the policy.

A standard documents what is going to be done without going into detail about how to do it; that is the responsibility of a procedure. A standard defines acceptable encryption, and a procedure then details how to configure it for a particular piece of software. A standard states that guest accounts must be disabled; the procedure describes how to disable them.

Procedures

Would your co-workers know how to perform your job if you won the lottery? Procedures must be written so that anyone can follow and understand them. They should be simple enough that a new hire or a junior administrator could perform the function without much trouble. Procedures focus on how a job function is performed without getting into what is acceptable. They can take the form of swim-lane diagrams or step-by-step instructions, but the document must state who is responsible for each job function.

Policies, Standards, and Procedures Documentation Layout Framework

All too often I encounter an organization that has combined its PSPs into a single 30- to 100-page document. This makes finding what you are looking for extremely difficult, especially when you need to locate the document for a specific control. Breaking the documents up helps. If you follow the methodology that procedures are written to back up standards, and standards are written to back up policies, you are on the right track. The Open Policy Framework provides a way to accomplish this.

In the framework layout, the policy sits on the left-hand side with its subsequent standards and procedures falling beneath it. For example, document 100.00 is the Information Security Policy, with each standard and procedure below it: Acceptable Use is 100.01, Encryption is 100.03, and so on. Procedures follow the same numbering scheme, with 100.03.01 being, for instance, the procedure for configuring an Apache server. This numbering scheme makes it easy to locate the information you need.

Writing policies, standards, and procedures may be the unglamorous side of information technology and security, but it is a necessity. When an organization's maturity level is scored, it needs not only the appropriate technical safeguards in place but also the documentation to back them up.

Building a Successful Cybersecurity Program

Over the years I have had the opportunity to develop successful cybersecurity programs for many organizations. When creating a cybersecurity program, an organization must know where it is and where it wants to go. Though not a requirement, cybersecurity frameworks or standards documents written by experts in the field are helpful in designing a program and roadmap. Without them, the cybersecurity program will have no direction and will not achieve the organization's goals.

Get Executive Buy-In

You were just hired into an organization, or promoted, to work on its cybersecurity program. Congratulations! Now what? You need to gain executive buy-in for the program, which can be easy or extremely challenging. Chances are that if you were hired as the first cybersecurity employee, the organization is taking this seriously. However, that does not mean it will be smooth sailing. You may need to teach cybersecurity best practices to the executive suite so that the program is well understood and can be prioritized within the organization.

In addition to gaining executive buy-in, you need to provide recurring status updates, anything from email summaries to monthly meetings with executive stakeholders. These updates keep leadership informed of where the organization stands with the cybersecurity program. They are also the perfect time to solicit feedback from the leadership team on how the program has progressed, any shortcomings, and any concerns they may have. A continuous feedback loop is needed to ensure that the security program meets the objectives set out by the business.

Pick a Cybersecurity Framework

There are plenty of cybersecurity frameworks to choose from, but which one is right for your organization? First, decide whether you need to pursue certification against a given framework, such as the ISO 27000 series. If certification is not a top priority, you can choose from other well-known cybersecurity frameworks, such as the NIST Cybersecurity Framework or those developed by the Australian Cyber Security Centre.

For organizations that are just starting out and looking for a well-rounded approach, I recommend using the NIST Cybersecurity Framework together with the Center for Internet Security (CIS) Top 20 Security Controls. Why two, you ask? The NIST Cybersecurity Framework is a great framework for standardizing the organization's administrative controls: your policies, standards, procedures, and guidelines. The CIS Top 20 Security Controls is a framework for your technical controls, which address how servers and networks are configured, whether antivirus is deployed, and whether a robust patching cadence exists. These two frameworks complement each other well and provide the foundation for how you want your security program to operate.

Perform An Audit

The audit stage is critical, as it will determine your organization's current state and shape the future state of the cybersecurity program. Take plenty of time to review the selected framework, as it will guide you through the audit process. Prior to performing the audit, gather as much information as possible about the organization and its IT resources. Collecting documentation, architectural drawings, application flow diagrams, policies, standards, and procedures, along with reviewing previous audits, will help you gain insight into the environment.

Once the collection and review of documentation is complete, you begin the interview process. Interviews are designed to gain additional insight into the environment that was not discovered during the document collection phase. When performing the interviews, first consider the audience. Are they technical or non-technical? Will they understand what is being asked? Taking that into consideration will help you get to the answers you are looking for.

Determine Your Current State

When the audit is complete, it is time to perform a gap analysis, which will determine the current state of your cybersecurity and information technology programs. Framework objectives the organization already meets can be closed out; any deficiencies found during the audit become findings. The contrast between the objectives you meet and the ones you do not is the outcome of the gap analysis, and it represents the current state of your program. This current-state report is what you present to senior management or a steering committee.

Develop A Future State Cybersecurity Roadmap

Now that the current state is defined, it is time to define the strategic roadmap, or future state, of your cybersecurity program. The future state is where you would like to see your cybersecurity program six months to five years out. The gap analysis performed while developing the current state will help define this for you. Objectives that are easy to implement can be put in place fairly quickly; those that require significant planning and funding go on the strategic roadmap for future implementation.

Determining where to start can be daunting; however, there are two places to look for assistance. If you chose the Center for Internet Security framework, this is already laid out for you. The Top 20 controls are listed in the order in which they should be implemented within the environment. The first six controls are what CIS calls "Basic." As the name implies, these controls build the foundation of the program and include:

Center for Internet Security – Basic Controls

  1. Inventory and Control of Hardware Assets
  2. Inventory and Control of Software Assets
  3. Continuous Vulnerability Management
  4. Controlled Use of Administrative Privileges
  5. Secure Configuration for Hardware and Software on Mobile Devices, Laptops, Workstations and Servers
  6. Maintenance, Monitoring and Analysis of Audit Logs

If you chose a different framework, do not worry; there are other ways to determine your starting point, including the help of the board of directors or executive management. Understanding the business's requirements, needs, and wants will drive the direction of your cybersecurity program, but push back where it makes sense.

Restrict AWS Console Access Based On Source IP Address

Zero trust, or risk-based authentication, can be hard to achieve. Organizations must trust both the identity being used and the location from which the user is authenticating. Many cloud services, such as AWS, have functionality built in to help protect your account. This is a must for preventing account takeover (ATO) and protecting the confidentiality, integrity, and availability of your AWS systems.

AWS's built-in tools for protecting your account are easy to use. Automated checks validate that the root account has multi-factor authentication turned on, that the root account has no programmatic access keys, and so on. One function missing from the console is restricting accounts to trusted networks. To add it, go to IAM, click Policies, create a new policy, and use the JSON editor to paste the following:

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
      "NotIpAddress": {
        "aws:SourceIp": [
          "Source IP Address"
        ]
      }
    }
  }
}

Replace "Source IP Address" with the source IP address(es) of your corporate network; aws:SourceIp accepts individual addresses or CIDR ranges such as 203.0.113.0/24.

Once the policy has been created, attach it to either a user account or a group that users are a part of. Now, when someone logs in from outside the network, they will receive an "Access Denied" error when trying to access any AWS resources.
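
If you prefer the AWS CLI over the console, the same policy can be created and attached from the command line. A minimal sketch, assuming the JSON above is saved as deny-untrusted-ips.json and that a group named corp-users already exists (both names, and the account ID, are placeholders):

# Create the managed policy from the JSON document above
aws iam create-policy \
    --policy-name DenyAccessFromUntrustedNetworks \
    --policy-document file://deny-untrusted-ips.json

# Attach it to an existing group, using the policy ARN returned above
aws iam attach-group-policy \
    --group-name corp-users \
    --policy-arn arn:aws:iam::123456789012:policy/DenyAccessFromUntrustedNetworks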

For the latest on this policy, or other AWS policies, please check out the GitHub Repo.

PyMyDB – Simplifying MySQL Backups

Let's face it: when was the last time you performed a database backup? How about any backup? PyMyDB was written to ease the burden of performing MySQL and MariaDB database backups.

Statistics have shown that businesses do not perform regular backups on their IT resources. Worse yet, many businesses close up shop after a major catastrophe due to inadequate backups. What does your backup solution look like?

Written in Python 3 and using the Boto3 and MySQLdb modules, PyMyDB connects to the server, queries all databases the user has access to, and then performs a MySQL dump. To supply the database credentials, you need an Amazon Web Services account and AWS Secrets Manager, a simple service that lets you securely store any number of secrets.

The script looks for three stored secrets in Secrets Manager: username, password, and hostname. Once that information is saved, grab the secret's storage location and region name and plug them into the script. When working properly, the script performs a backup of all the databases stored on the server.
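
Storing those values with the AWS CLI could look roughly like the following. This is a sketch only; the secret name, region, and key layout here are placeholders, so check the repository README for the exact names the script expects.

# Store the database credentials the script will read at run time
aws secretsmanager create-secret \
    --name pymydb/credentials \
    --region us-east-1 \
    --secret-string '{"username":"backupuser","password":"ChangeMe123!","hostname":"db.example.com"}'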

The script can be found at: https://github.com/jasonbrown17/pymydb

***Update 20201021***

The PyMyDB script now has S3 upload functionality. Add the S3 bucket name to the variable for offsite backups.
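
After a run, you can confirm that the dump landed offsite with the AWS CLI; the bucket name below is a placeholder.

# List the objects uploaded to the backup bucket
aws s3 ls s3://my-backup-bucket/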

Building Zero Trust in Authentication

Building Zero Trust

When you think of zero trust, you tend to think of network segmentation: creating communities of interest (COIs) and segmenting servers away from each other to prevent lateral movement within the network. Network segmentation, however, is just the first step in a zero-trust model. Other steps include authentication, segregation of duties, and cryptographic certificates. Though all are important, authentication is a difficult one to get right.

Secure Shell, or SSH, is a protocol used for remote connectivity, mainly to UNIX- and Linux-based operating systems. SSH creates a secure, encrypted connection between the administrator's endpoint and a server. Though SSH is heavily used for connectivity, it has one major flaw: you must trust the host key presented to you on first login. Trust On First Use, or TOFU, requires the administrator to initially trust the server they are connecting to without knowing the validity of the key being presented. Once trust is given to that unknown key, the administrator is allowed to continue with their username and password.
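
This is the prompt every administrator has clicked through at some point; it looks roughly like the following (the host name, address, and fingerprint are illustrative):

$ ssh admin@web01.example.com
The authenticity of host 'web01.example.com (203.0.113.10)' can't be established.
ED25519 key fingerprint is SHA256:...
Are you sure you want to continue connecting (yes/no/[fingerprint])?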

Trust On First Use

TOFU is nothing new in security and is widely used throughout information technology. Take, for instance, setting up a new firewall or other security appliance. Security appliances are designed to encrypt administrative connections to the management plane, and most, if not all, major vendors use self-signed certificates to encrypt that communication. This is another example of TOFU: a firewall administrator must first accept an unknown certificate before being allowed to connect to the device. Accepting unknown or unverified TLS certificates is something we constantly tell end users not to do. So why is it okay for us to do so? How do we know whether our connections are trusted and not being compromised by a man-in-the-middle attack?

How Do I Trust My Connections?

Unfortunately, there is no easy answer to this question. Systems are designed to generate new private and public keys upon installation, which ensures that no two private keys are identical or reused between systems. When these keys and certificates are generated for the first time, an administrator has no option but to trust the certificate being handed out.

Cryptographically signing certificates is one way to overcome this problem, and it is already done for software package installation and patching. Microsoft, Red Hat, Ubuntu, and Apple all cryptographically sign their software, which prevents the operating system from installing applications created by a malicious user that could infect the machine.

Organizations can apply the same level of trust to authentication. Creating an internal Public Key Infrastructure, or PKI, can reduce that uncertainty. A PKI provides validity for those connections because one only has to trust the root certificate authority; once that trust is established, certificates generated and signed by the root certificate authority are also trusted by the system. Getting it right the first time might seem difficult, but once established, a PKI can prove to be one of the best assets security professionals have in their arsenal.
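
For the SSH case specifically, OpenSSH ships with lightweight certificate-authority support that removes the TOFU prompt entirely. A minimal sketch, with host names, principals, and paths as placeholders:

# Create the CA key pair once and keep the private key offline
ssh-keygen -t ed25519 -f host_ca -C "Internal SSH host CA"

# Sign each server's existing host key with the CA
ssh-keygen -s host_ca -I web01.example.com -h \
    -n web01.example.com,203.0.113.10 \
    /etc/ssh/ssh_host_ed25519_key.pub

# On the server, publish the resulting certificate in sshd_config:
#   HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub

# On each client, trust the CA once in known_hosts:
#   @cert-authority *.example.com <contents of host_ca.pub>

Clients that trust the CA line above will accept any host certificate it has signed, so the first connection no longer requires blindly accepting an unknown key.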

BIND Response Policy Zones

Domain Name Service

Accessing resources across the internet is done through IP addresses. When you check your email, search on Google, or open your favorite social media site, you are connecting to its IP address. The Domain Name System (DNS) converts a name to an IP address, letting you remember your favorite website by name. For instance, DNS will convert www.amazon.com to 89.187.178.56. How could we use BIND and DNS to thwart the bad guys?

Response Policy Zones

One of the best yet least-known features of BIND is Response Policy Zones (RPZ). An RPZ allows an administrator to rewrite a DNS response before it is sent back to the user. In the example above, when a user goes to access Amazon, DNS converts the name to a number; once the web browser knows that number, it reaches out to the server to access its resources. What if we were to manipulate that number, or make it appear to our users that Amazon did not exist at all?

This is where RPZs come in. By configuring BIND to receive a recursive DNS lookup and manipulate the response returned to the user, you can effectively stop users from reaching malicious sites.

Let us look at the recent privacy and security concerns related to Zoom. Due to its popularity and ease of use, the Zoom video conferencing service has now become a front runner. Not only has Zoombombing, where an uninvited user gains access to your video stream, become a headache for the service, but so have phishing websites. Recently, domains such as zoompanel.com and zoomdirect.com.au have sprung up; these websites are used to phish users' Zoom credentials. We can use an RPZ to block company personnel or home users from accessing those websites, mitigating the attack.
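
A minimal sketch of what that looks like in BIND: enable a response policy zone in named.conf, then list the domains to block in the zone file. The zone name and file path here are placeholders.

# named.conf (inside the options block)
response-policy { zone "rpz.local"; };

# named.conf (zone definition)
zone "rpz.local" {
    type master;
    file "/etc/bind/db.rpz.local";
};

; /etc/bind/db.rpz.local
$TTL 60
@                   IN SOA  localhost. root.localhost. ( 1 3600 900 604800 60 )
                    IN NS   localhost.
; CNAME to the root (".") tells RPZ to answer NXDOMAIN for these names
zoompanel.com       IN CNAME .
zoomdirect.com.au   IN CNAME .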

How Do RPZ’s Work?

When properly configured, a BIND RPZ will return a different answer than the one published on the internet. The following query, sent to a public resolver, returns the real IP address for zoomdirect.com.au.

nslookup zoomdirect.com.au 8.8.8.8
Server: 8.8.8.8
Address: 8.8.8.8#53


Non-authoritative answer:
Name: zoomdirect.com.au
Address: 119.81.45.82

The query responded with the IP address of 119.81.45.82.

What does an RPZ response look like?

nslookup zoomdirect.com.au ns1.svarthal.net
Server: ns1.svarthal.net
Address: 45.19.203.68#53

** server can’t find zoomdirect.com.au: NXDOMAIN

In the example above, the response changed from 119.81.45.82 to NXDOMAIN. The lookup comes back empty, so to our users the phishing server simply does not exist.

DNS Architecture

Deploying a local BIND DNS server for an organization can be quite daunting. There is a multitude of options available within the service's configuration. Though secure configuration is extremely important, one must not overlook how to architect the service within the network. Architecting it correctly from the start will ease configuration headaches further down the road.

Network Segmentation

Network segmentation must be considered when standing up new systems. Segmentation is performed with a firewall or proxy device that restricts network traffic. This restriction blocks unused protocols, closes unneeded network ports, and prevents access from systems and services outside the segment.

To segment services properly, an organization can either isolate individual systems from one another or create a Community of Interest (COI). A COI groups like systems within a network segment. For example, a DNS COI is a network segment where all of the organization's DNS servers reside. Though this is not as secure as segmenting each DNS server away from every other, it is more secure than placing all systems on the same flat network segment, as sketched in the firewall rules below.
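
On a Linux firewall fronting that segment, the restriction might look roughly like the following; the 10.0.10.0/24 subnet is a placeholder for the DNS COI.

# Permit only DNS into the DNS COI and drop everything else
iptables -A FORWARD -d 10.0.10.0/24 -p udp --dport 53 -j ACCEPT
iptables -A FORWARD -d 10.0.10.0/24 -p tcp --dport 53 -j ACCEPT
iptables -A FORWARD -d 10.0.10.0/24 -j DROP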

Primary DNS

When deploying any DNS service, whether it is BIND or a different system, the primary DNS server must be as secure as possible. The primary server is the source of all your DNS naming information. If someone gained access to this server, or if it were misconfigured, they could change DNS records and redirect users to phishing sites. This is why an organization must deploy two or more secondary servers.

Hidden Primary

In a hidden primary configuration, the primary DNS server is never exposed to anyone. The organization configures its DHCP server to advertise only the secondaries, not the primary, and firewall rules restrict DNS traffic to the secondary servers while disallowing access to the primary from outside the IT department. This limits the exposure of the primary server, and not advertising it further reduces the amount of information a potential attacker can gather.

Secondary DNS

While the primary server is never advertised to users or the public, the secondary servers are. These servers receive zone data from the primary server and are configured to answer lookups. Firewalls or DNS proxies must be configured to allow only DNS traffic to traverse the network and to limit the number of requests, preventing certain types of attacks.
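
A rough sketch of how this maps onto BIND: the hidden primary restricts zone transfers and notifications to the secondaries, and each secondary pulls the zone from the primary. The IP addresses and zone name are placeholders, and the older keywords are noted for pre-9.16 versions of BIND.

# named.conf on the hidden primary (10.0.10.5 and 10.0.10.6 are the secondaries)
zone "example.com" {
    type primary;                       # "type master" on older BIND versions
    file "/etc/bind/db.example.com";
    allow-transfer { 10.0.10.5; 10.0.10.6; };
    also-notify    { 10.0.10.5; 10.0.10.6; };
    notify explicit;                    # notify only the servers listed above
};

# named.conf on each secondary (10.0.10.1 is the hidden primary)
zone "example.com" {
    type secondary;                     # "type slave" on older BIND versions
    file "db.example.com";
    primaries { 10.0.10.1; };           # "masters" on older BIND versions
};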

The hidden primary concept is nothing new, but it is not well known. Limiting the exposure of the primary server and allowing users to reach only the secondaries prevents a number of attacks an organization could face. This is an important first step when planning an initial DNS deployment, or a redeployment, of the service. Getting this right from the start will help secure the rest of the configuration down the road.

Svart Hal: The DNS Firewall

With the unprecedented circumstances we as a society are facing, we have begun to transition from an on-premises to a remote workforce. Though this transition is exciting to some, it can bring real stress to an organization, its workforce, and its IT infrastructure. Organizations that are new to a remote workforce struggle to ensure that their employees practice good cyber hygiene at home: that data is backed up and secured, that employees use work devices instead of personal devices, and that software and antivirus stay up to date. Many of these systems are automated, pushing software and updates, yet they may not be able to reach an employee's endpoint remotely.

Many organizations have an on-premises security stack through which all traffic to and from the business network is funneled. A security stack might include a firewall, IDS/IPS, web filtering, antivirus, a VPN, and so on. While these protections help, they do not cover a remote workforce unless every employee is forced to connect to the VPN, which places its own strain on the corporate network. Organizations that terminate VPN connections on the perimeter firewall place additional stress on that device, since every encrypted connection requires additional CPU and memory to encrypt and decrypt.

Due to this additional strain, companies have opted to have employees connect to the VPN only when accessing internal resources. While this frees up firewall resources, it makes it significantly harder to ensure that employees are protected from security threats. As organizations scramble to better protect their remote employees, they tend to lose sight of the big picture. The question ultimately becomes, "How can I protect our employees from security risks in a cost-effective manner?"

One way to protect employees is through a DNS firewall. Typical firewalls block traffic only by source, destination, port, and protocol. A DNS firewall works much like web filtering: it blocks lookups of malicious domains, protecting the employee from accessing websites that could push malware to the endpoint. There are plenty of utilities one can use to block these types of attacks; Pi-hole and OpenDNS provide free and paid services that help mitigate them.

I have been working on a project called Svart Hal, which takes advantage of ISC BIND's Response Policy Zones (RPZ). In the coming days I will post how an individual or organization can take advantage of RPZ and the scripts, which are provided free and open source. This setup can save hundreds or even thousands of dollars over traditional web filtering services. For those currently using RPZ with BIND and looking for an inexpensive way to protect your users, please take advantage.

Check out Svart Hal on GitHub

Continuous Battle Over Encryption And Your Privacy

Privacy vs. Services

The NCTA, CTIA, and USTelecom recently sent an open letter to Congress with concerns over Google's implementation of the DNS over HTTPS (DoH) protocol. DoH encrypts DNS lookups, providing additional privacy on the internet. In the letter the companies state that internet service providers provide functionality including, "(a) the provision of parental controls and IoT management for end users; (b) connecting end users to the nearest content delivery networks, thus ensuring the delivery of content in the fastest, cheapest, and most reliable manner; (c) assisting rights holders' and law enforcement's efforts in enforcing judicial orders in combatting online piracy, as well as law enforcement's efforts in enforcing judicial orders in combatting the exploitation of minors." (pg. 3).

Monopolizing on DNS

Another concern stated in the letter is that Google would be able to monopolize the queries made to its DNS servers. When businesses or households use a particular DNS provider, that provider can collect their IP addresses and DNS queries, mine the data it has collected, and sell it to marketing firms. The companies also state that because all Google-made devices would use Google's DNS service, it would create a single point of failure (which is why Google operates two resolver addresses, 8.8.8.8 and 8.8.4.4).

Another Attack Against Encryption

U.S. Attorney General William Barr has once again made a plea to tech giant Facebook to create a backdoor into its end-to-end encrypted messaging platforms (e.g., Facebook Messenger, WhatsApp). Barr is not alone; the United Kingdom and Australia have also come out against such end-to-end encryption. They state that without a backdoor, law enforcement cannot perform its duties of capturing and prosecuting criminals in court. Yet lawmakers do not seem to understand that encryption algorithms are completely free and open. Anyone can do a simple search online and discover the math behind the most popular, and strongest, encryption we have today. If such backdoors were put in place, privacy advocates would certainly use other tools, like Signal, to protect their secrecy online.

Protecting Privacy online

There are many things one can do to protect their privacy online. The Electronic Frontier Foundation has published a number of articles on how to maintain privacy. Its Surveillance Self-Defense guide is a well-documented series of articles you can use to protect your privacy and security while online, covering everything from enabling multi-factor authentication, creating strong passwords, and using password managers to "Choosing the VPN That's Right for You."

BIND And DNS-Over-HTTPS

Privacy and security professionals have been pushing for encryption of internet traffic for many years now. Not only has there been a significant push from the privacy community, but search engine giants like Google all but force websites to use encryption by rewarding it in search engine optimization (SEO) rankings. Though purchasing Transport Layer Security (TLS) certificates can be quite expensive, open source projects such as Let's Encrypt allow anyone to create a publicly trusted TLS certificate for free. These certificates are accepted by major browsers without throwing warnings and protect the privacy of the user accessing the site. This, however, only solves half of the problem.
According to a recent article released by the Electronic Frontier Foundation (EFF), DNS is one of the biggest internet privacy issues facing home and corporate users. In its traditional implementation, DNS relays queries in clear text, which allows Internet Service Providers (ISPs), or anyone in the path of your internet traffic, to look at DNS queries and begin to build a profile on you.

Why is this a problem?

Before accessing a website, regardless of whether it uses encryption, your computer performs a DNS lookup to find the IP address of the site you are trying to reach. For instance, a simple DNS request to look up where cnn.com resides goes out in plain text for anyone to see. Only once the computer knows the IP address it is supposed to access does the web browser make its request to the website over a TLS connection.

A person sniffing the traffic may not know the contents of the pages you are looking at, but they do know that you accessed cnn.com. As you might imagine, this is a significant privacy issue: an ISP or anyone similarly positioned can build a profile on you and sell it to third-party marketers, who can then target ads.
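
You can see this for yourself by watching port 53 on your own machine while browsing; the interface name below is a placeholder.

# Watch outbound DNS queries in clear text
sudo tcpdump -n -i eth0 udp port 53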

Solving it on the individual level

There are plenty of tools individuals can use to protect their internet traffic, from applications loaded on a computer or smart device to VPNs and Tor, all of which protect a single person. But what if you wanted to protect a household or an organization? Having everyone install and maintain individual applications would be cumbersome.

BIND with DNS-over-HTTPS

One such way to do this is to set up a BIND DNS server. This allows everyone in the organization to send their DNS queries to a resolver you control, safeguarding those queries from data mining. However, the queries the server forwards upstream are still sent in clear text and can be sniffed. To overcome this problem we need to install 'cloudflared' on the server. The cloudflared service then performs DNS-over-HTTPS queries, encrypting the DNS traffic from the BIND server to Cloudflare's resolvers. This prevents anyone from sniffing your DNS traffic, allowing for additional anonymity on the internet.

Getting your system ready

First you will need to install and configure BIND on your server. Once that is complete, download and install the cloudflared application on the server. After installation you will need to make one minor change to the forwarders section in your `named.conf.options` file. First, remove the comments in front of forwarders; these will be in the form of double forward slashes (//).

Next, add the port number to the loopback IP address. The configuration will then look like:

`forwarders { 127.0.0.1 port 54; };`
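
For that forwarder to work, cloudflared must be listening on the local port BIND points at. A minimal sketch of running it in DNS-proxy mode; the upstream resolvers are assumptions, and flag names may vary between cloudflared releases.

# Run cloudflared as a local DNS-over-HTTPS proxy on 127.0.0.1:54
cloudflared proxy-dns \
    --address 127.0.0.1 \
    --port 54 \
    --upstream https://1.1.1.1/dns-query \
    --upstream https://1.0.0.1/dns-query

In production you would run this under systemd (or similar) so it starts before BIND.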

After that, load up Wireshark and take a look at the server's outbound traffic. You should no longer see plain DNS leaving the server toward the upstream resolvers, as everything will be running over TLS.

Am I fully protected now?

No. Though you are one step closer, you still need to ensure that you are performing your due diligence when accessing websites on the internet. Be careful when accessing websites that do not use encryption, especially when typing in your username and password. Use multifactor authentication in addition to a password manager and always double check the website you are accessing.
