IT. SECURITY. OPEN SOURCE.

Author: Jason

Executive Buy-In

Importance of Business Goals and Objectives for Information Security

I recently accomplished a long-time goal of mine: to become a published author. I had the opportunity to write for Packt Publishing, which is a wonderful company to work with. In my book, Executive's Cybersecurity Program Handbook, I cover quite a bit of content, from information security program development and IT governance to infrastructure security. One of the more important topics throughout the book, however, is the need to align the information security program with organizational goals and objectives. It is also important to establish coworker relationships early in your tenure. This makes it easier for you, as the head of security, to work through issues you may run into now or in the future.

Coworker relationships

It does not matter whether you are a seasoned CISO or a security analyst; it is important to establish relationships with your coworkers early on. These relationships will help you build rapport with the people you work with on a daily basis. Information security is often looked upon as the department of 'No!', meaning the security department is seen as holding up projects and causing delays. We must be an enabler for the business!

Building these relationships will promote healthy discussions around information security. It will assist with breaking down barriers between information security, IT, and the business. Coworkers will feel comfortable explaining their hardships when changes are made or new policies are written.

Aligning with the business

Another important aspect of your position is building relationships with senior management. These relationships will eventually move you toward a better understanding of business goals and objectives. Once you understand what is important to the organization, you can begin to craft your security program around those priorities.

Without this step, you may end up tackling non-issues, wasting time, money, and effort. All too often I have seen companies directed by managed security service providers to implement changes or purchase new IT resources without an understanding of business goals. This can lead to frustration and doubt about the effectiveness of controls.

Getting executive buy-in

Once you have established relationships with your coworkers and come to understand the business objectives, it is time to get executive buy-in. This buy-in is needed to ensure that your information security strategy aligns with business goals. Without buy-in from the business, your program could go nowhere. This is one of the most important steps and should not be overlooked. Many people say that the budget will make or break the department; without executive buy-in, you may never get the budget.

Where do we go from here?

You must crawl before you can walk. Building relationships, aligning with business goals and objectives, and getting executive buy-in are what this initial stage is all about.

Remember, build relationships early on. Hold one-on-ones with your staff and fellow managers, directors, and C-suite employees. Build a rapport with them so they can come talk with you without hesitation.

When building relationships, ask questions to familiarize yourself with how the business functions. Understand what is important to it and how it conducts operations. This will help you decide what to tackle first.

As you work on building relationships and understanding the business, work with the executive teams to get buy-in. Without buy-in, the department will ultimately go nowhere.

Terraforming Cloudflare

In my previous post, I walked you through configuring AWS Route53 with DNSSEC and Terraform. Cloudflare, a SaaS security provider, offers a number of different services, including a web application firewall (WAF), DNS, load balancing, and zero trust. By using Terraform with Cloudflare, you can automate a number of infrastructure services, just like those offered at AWS.

What Is Cloudflare?

Cloudflare is a major security services provider. Known for their DNS and WAF services, they also provide DDoS mitigation, zero trust, caching, and a CDN. Cloudflare is also a domain registrar, allowing customers to buy domains or transfer existing ones to the service. Another benefit of using Cloudflare is their free TLS certificates: by using their services, you can leverage their TLS certificates to ensure the privacy and security of your site.

They provide these services by proxying web traffic. When configuring the service, you provide your FQDN and the origin's IP address. Once configured, Cloudflare advertises their IP address instead of the origin's. This forces web traffic to run through their datacenters, where any malicious traffic is scrubbed before heading to your website.

Terraforming DNS Records

The Terraform syntax used for Cloudflare records is a little different from that used with Route53, but easy to work with. First we reference our existing zone:

data "cloudflare_zone" "example_domain" {
  name = "example.com"
  account_id = "xxxxxxx"
}

Next we create the A record. The syntax below points example.com at the origin IP address of 1.1.1.1. To proxy the origin's IP address, set `proxied` to true. A `ttl` of 1 tells Cloudflare to manage the TTL automatically, which is required for proxied records.

resource "cloudflare_record" "example_a_record" {
  zone_id = data.cloudflare_zone.example_domain.id
  name = "example.com"
  type = "A"
  ttl = "1"
  value = "1.1.1.1"
  proxied = true
}

Now create a CNAME record which points www to example.com:

resource "cloudflare_record" "example_www_cname_record" {
  zone_id = data.cloudflare_zone.example_domain.id
  name = "www"
  type = "CNAME"
  value = "example.com"
  ttl = "1"
  proxied = true
}

To help secure your website, you should have both the A and CNAME records proxied. This prevents attackers from discovering your origin IP address. Further examples for TXT and MX records can be found in my GitLab repo.
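As a rough sketch of what those might look like (the record values below are placeholders for illustration, not from the repo), TXT and MX records follow the same pattern. Note that an MX record takes an additional `priority` argument and is never proxied:

```hcl
# Hypothetical SPF TXT record -- the value is a placeholder.
resource "cloudflare_record" "example_spf_txt_record" {
  zone_id = data.cloudflare_zone.example_domain.id
  name    = "example.com"
  type    = "TXT"
  value   = "v=spf1 mx -all"
  ttl     = "3600"
}

# Hypothetical MX record -- the mail hostname and priority are placeholders.
resource "cloudflare_record" "example_mx_record" {
  zone_id  = data.cloudflare_zone.example_domain.id
  name     = "example.com"
  type     = "MX"
  value    = "mail.example.com"
  priority = "10"
  ttl      = "3600"
}
```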

Enhancing Security Through Terraform

Cloudflare also allows you to create firewall rules and configure security enhancements through Terraform. First, let's create the file for the security enhancements.

resource "cloudflare_zone_settings_override" "example_security" {
    zone_id = data.cloudflare_zone.example_domain.id
    settings {
        brotli = "on"
        ssl = "full"
        waf = "on"
        automatic_https_rewrites = "on"
        min_tls_version = "1.2"
        hotlink_protection = "on"
        http2 = "on"
        http3 = "on"
        mirage = "on"
        webp = "on"
        security_header {
            enabled = true
            nosniff = true
        }
    }
}

This syntax enforces a minimum of TLS v1.2 between the user and Cloudflare, while `ssl = "full"` encrypts traffic from Cloudflare to our origin server. It also protects your images: hotlink protection prevents other sites from embedding images hosted on yours.

Cloudflare Firewall Rules

Lastly, we need to configure firewall rules to enhance the security of our website. For a WordPress website, there are a number of common attacks worth mitigating:

  • XML-RPC attacks
  • Comment spam
  • Traffic from blocked geographies

To put these mitigations in place, we first need to create the firewall filters. To do so, create the following:

resource "cloudflare_filter" "example_geoip_filter" {
    zone_id = data.cloudflare_zone.example_domain.id
    description = "GeoIP Blocks"
    expression = "(ip.geoip.country eq \"RU\")"
}

resource "cloudflare_filter" "example_xmlrpc_filter" {
    zone_id = data.cloudflare_zone.example_domain.id
    description = "Block XML RPC"
    expression = "(http.request.uri.path contains \"/xmlrpc.php\")"
}

resource "cloudflare_filter" "example_direct_comments_filter" {
    zone_id = data.cloudflare_zone.example_domain.id
    description = "Block Direct Requests To WP Comments"
    expression = "(http.request.uri.path eq \"/wp-comments-post.php\" and http.request.method eq \"POST\" and http.referer ne \"example.com\")"
}

Next, create the firewall rules:

resource "cloudflare_firewall_rule" "example_geoip_rule" {
    zone_id = data.cloudflare_zone.example_domain.id
    description = "GeoIP Blocks"
    filter_id = cloudflare_filter.example_geoip_filter.id
    action = "block"
    priority = "1"
}

resource "cloudflare_firewall_rule" "example_xmlrpc_rule" {
    zone_id = data.cloudflare_zone.example_domain.id
    description = "Block XMLRPC"
    filter_id = cloudflare_filter.example_xmlrpc_filter.id
    action = "block"
    priority = "2"
}

resource "cloudflare_firewall_rule" "example_comments_rule" {
    zone_id = data.cloudflare_zone.example_domain.id
    description = "Block Direct Requests To WP Comments"
    filter_id = cloudflare_filter.example_direct_comments_filter.id
    action = "block"
    priority = "3"
}

The syntax for the rule set is straightforward. The "filter_id" identifies the filter created above. The "action" is what you would like to do with matching traffic, which in this instance is "block." The "priority" determines where in the rule set the filter is applied. These rules are evaluated in order "1", "2", "3": first the GeoIP block, then XML-RPC, and then the comments filter. If you do not set "priority" in the stanza, Terraform will apply the filters in whatever order it chooses.

Wrapping It Up

There are plenty of other Cloudflare configurations that can be scripted for Terraform. For the full list, check out the Cloudflare Provider Terraform documentation.

Configuring DNSSEC With Terraform and AWS Route 53

Why Enable DNSSEC?

The Domain Name Service (DNS) has been a part of the internet since the 1980s, mapping names to IP addresses. Though there have been a few blog posts here about DNS, we never discussed how to authenticate the responses you receive. DNS without authentication could lead someone to a rogue or spoofed internet site. This is where DNSSEC comes in.

A common attack against DNS is known as DNS cache poisoning. This happens when an attacker manages to inject forged records into a resolver's cache for zones that do not belong to them. For instance, if I could poison a resolver's cache for a Google domain, I could point destination traffic to a host in my possession. I would then be able to set up sites which look identical to Google's and steal your username and password, or capture what you look up online.

What DNSSEC Is Not

Activating DNSSEC on your domain provides authenticity of the zone records; it does not provide confidentiality. DNSSEC does not encrypt your DNS traffic to and from the resolver. To get encryption you must use DNS over HTTPS (DoH) or DNS over TLS (DoT). These protocols protect your online privacy by encrypting DNS traffic to the resolver, preventing Internet Service Providers (ISPs) from sniffing your traffic.

You can also combine DoH/DoT with DNSSEC, which provides both privacy and authenticity. This is a newer combination, and only a few DNS providers offer it.

Getting Started

First you need to check if your Top Level Domain (TLD) supports DNSSEC.

If you own a .com domain, you can activate DNSSEC. For other TLDs, please check the ICANN website.

Creating the KMS Key and Policy

The following is a template which can be used to create the KMS key and JSON policy. This will create an ECDSA P256 asymmetric key with sign and verify capabilities. Remember, DNSSEC does not provide privacy so all we need is to sign and verify the domain.

resource "aws_kms_key" "domaindnssec" {
  customer_master_key_spec = "ECC_NIST_P256"
  deletion_window_in_days  = 7
  key_usage                = "SIGN_VERIFY"
  policy = jsonencode({
    Statement = [
      {
        Action = [
          "kms:DescribeKey",
          "kms:GetPublicKey",
          "kms:Sign",
        ],
        Effect = "Allow"
        Principal = {
          Service = "dnssec-route53.amazonaws.com"
        }
        Sid      = "Allow Route 53 DNSSEC Service",
        Resource = "*"
      },
      {
        Action = "kms:CreateGrant",
        Effect = "Allow"
        Principal = {
          Service = "dnssec-route53.amazonaws.com"
        }
        Sid      = "Allow Route 53 DNSSEC Service to CreateGrant",
        Resource = "*"
        Condition = {
          Bool = {
            "kms:GrantIsForAWSResource" = "true"
          }
        }
      },
      {
        Action = "kms:*"
        Effect = "Allow"
        Principal = {
          AWS = "*"
        }
        Resource = "*"
        Sid      = "IAM User Permissions"
      },
    ]
    Version = "2012-10-17"
  })
}

Next, we define the zone and tie the keys to the zone for signature and verification.

data "aws_route53_zone" "example" {
  name = "example.com"
}

resource "aws_route53_key_signing_key" "dnssecksk" {
  name = "example.com"
  hosted_zone_id = data.aws_route53_zone.example.id
  key_management_service_arn = aws_kms_key.domaindnssec.arn
}

resource "aws_route53_hosted_zone_dnssec" "example" {
  depends_on = [
    aws_route53_key_signing_key.dnssecksk
  ]
  hosted_zone_id = aws_route53_key_signing_key.dnssecksk.hosted_zone_id
}

Verify That DNSSEC Is Working

Use dig to verify that DNSSEC is working on the domain. To get started use the following command:

dig +short +dnssec example.com

Note that the resolver being used must be capable of DNSSEC lookups. To verify, run the dig command against a known DNSSEC-enabled domain and resolver, like Cloudflare's:

$ dig +short +dnssec cloudflare.com. @1.1.1.1
104.16.133.229
104.16.132.229
A 13 2 300 20220104184526 20220102164526 34505 cloudflare.com. T+hHkJPzWpqYHlh9qkTz9/YUzdOdOlmj5WhDytndJTEqqd9v3KJDz+Qx L1iV2ZhgvSUnV/YhPC4ccIJitS2y8A==

Now commit your code and you are all set.

Writing Policies, Standards, and Procedures for Your Next IT Assessment

Writing policies can be hard; writing good policies can be even harder. Though writing policies, standards, and procedures is often last on people's minds, it is a necessity. When building a cybersecurity program, not only are technical controls important, but policies, standards, and procedures (PSPs) are required documents for any information technology assessment. Whether it is for ISO 27000, NIST, SOX, or any other certification or framework, auditors will want to review your documentation.

Auditors want to ensure that your PSPs back up your IT program. They will evaluate how the technology is configured and implemented, and review your PSPs to ensure that IT governance has been established for the organization. The following describes what policies, standards, and procedures are and how to craft each document.

Policies

Policies are overarching documents which provide an overview of the control objective. They are high-level documents which establish IT governance and the intent to apply administrative and technical controls. Because they are high level, anyone can view the document without it giving away specific technologies or how they are implemented.

"Policy" is often used loosely for any written document, including standards and procedures. Policies, however, do not go into detail about how a particular technology is configured or the process for doing so. The document is used to show intent that these controls are in place.

Standards

Standards are medium- to low-level documents which depict how a device is configured or which technologies are acceptable to use. A standard might detail the acceptable levels of encryption, or which SPF, DKIM, and DMARC settings must be configured for email. Standards are used to back up what was previously stated in the policy.

A standard documents what is going to be done without going into detail about how to do it; that is the responsibility of a procedure document. A standard would go into detail on acceptable encryption, and a procedure would then detail how to configure it for a piece of software. A standard states that guest accounts should be disabled; the procedure depicts how to disable them.

Procedures

Would your coworkers know how to perform your job if you were to win the lottery? Procedures must be written so that anyone can follow and understand them. They should be simple enough that a new hire or a junior admin could perform the function without much trouble. Procedures focus on how a job function is performed without getting into details of what is acceptable. They can be in the form of swim lanes or step-by-step instructions; however, the document must state who is responsible for each job function.

Policies, Standards, and Procedures Documentation Layout Framework

All too often I encounter an organization that has combined its PSPs into a 30 to 100 page document. This makes finding what you are looking for extremely difficult, especially when you need to locate a document for a control. This is where breaking up documents is helpful. When you follow the methodology that procedures are written to back up standards, and standards are written to back up policies, you are on the right track. The Open Policy Framework provides a way to accomplish this.

When looking at the framework layout, the policy is on the left-hand side with its subsequent standards and procedures falling beneath it. For example, document 100.00 is the Information Security Policy, with each standard and procedure below it: Acceptable Use is 100.01, Encryption is 100.03, and so on. Procedures follow the same numbering scheme, with 100.03.01 being a procedure for configuring an Apache server, for instance. This numbering scheme allows one to easily locate the information needed.
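To illustrate the layout described above (the Apache procedure title is invented for the example), the hierarchy might look like:

```text
100.00        Information Security Policy
  100.01      Acceptable Use Standard
  100.03      Encryption Standard
    100.03.01 Procedure: Configuring TLS on an Apache Server
```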

Writing policies, standards, and procedures may be the non-sexy side of information technology or security, but it is a necessity. When scoring an organization's maturity level, the organization will need not only appropriate technical safeguards in place but also the documentation to back them up.

Building a Successful Cybersecurity Program

Over the years I have had the opportunity to develop successful cybersecurity programs for many organizations. When creating a cybersecurity program, an organization must know where it is and where it wants to go. Though not a requirement, cybersecurity frameworks and standardization documents written by experts in the field are helpful in designing a program and roadmap. Without one, the cybersecurity program will have no direction and will not achieve the organization's goals.

Get Executive Buy-In

You were just hired into an organization or promoted to work on their cybersecurity program. Congratulations! Now what? You need to gain executive buy-in for the program. This can be easy or extremely challenging. Chances are if you were hired as their first cybersecurity employee, the organization is taking this seriously. However, this does not mean it will be smooth sailing. You may need to teach cybersecurity best practices to those in the executive suite. This will ensure that the program is well understood and can be prioritized within the organization.

In addition to gaining executive buy-in for the program, you need to provide recurring status updates. These can range from email statements to monthly meetings with executive stakeholders, and they are needed to report where the organization is with the cybersecurity program. It is also a perfect time to solicit feedback from the leadership team on how the program has progressed, any shortcomings, or any concerns they may have. A continuous feedback loop is needed to ensure that the security program is meeting the objectives set out by the business.

Pick a Cybersecurity Framework

There are plenty of cybersecurity frameworks to choose from, but which one is right for your organization? First, decide whether you need to aim for certification against a given framework, such as the ISO 27000 series. If certification is not a top priority, you can choose from other well-known cybersecurity frameworks such as the NIST Cybersecurity Framework or those developed by the Australian Cyber Security Centre.

For organizations that are just starting off and looking for a well-rounded approach, I recommend using both the NIST Cybersecurity Framework and the Center for Internet Security's Top 20 Security Controls. Why two, you ask? The NIST Cybersecurity Framework is a great framework for standardizing the organization's administrative controls: your policies, standards, procedures, and guidelines. The CIS Top 20 Security Controls is a framework for your technical controls, aimed at how servers and networks are configured, having antivirus deployed, and keeping a robust patching cadence. These two frameworks complement each other well and provide the foundation for your security program.

Perform An Audit

The audit stage is critical, as it will determine the current and future states of your organization's cybersecurity program. Take plenty of time to review the selected framework, as it will guide you through the audit process. Prior to performing the audit, gather as much information as possible about the organization and its IT resources. Collecting documentation, architectural drawings, application flow diagrams, policies, standards, and procedures, along with reviewing previous audits, will help you gain insight into the environment.

Once the collection and review of documentation is complete, begin the interview process. Interviews are designed to gain additional insight into the environment that was not discovered during the document collection phase. When performing interviews, first consider your audience: are they technical or non-technical? Will they understand what is being asked? Taking that into consideration will help you get to the answers you are looking for.

Determine Your Current State

When the audit is complete, it is time to perform a gap analysis. The gap analysis will determine the current state of your cybersecurity and information technology programs. Objectives from the framework that the organization meets can be closed out; any deficiencies found during the audit become findings. The contrast between the objectives you meet and the ones you do not is the outcome of the gap analysis, and it represents the current state of your program. This current state report is what is presented to senior management or a steering committee.
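The arithmetic behind a gap analysis is simple enough to sketch in a few lines of Python. The objectives and their statuses below are invented for the example, not taken from any real assessment:

```python
# Hypothetical audit results: objective -> whether the organization met it.
results = {
    "Inventory of Hardware Assets": True,
    "Inventory of Software Assets": True,
    "Continuous Vulnerability Management": False,
    "Controlled Use of Administrative Privileges": False,
}

# Met objectives can be closed out; unmet ones become findings.
met = sum(results.values())
findings = [objective for objective, ok in results.items() if not ok]
coverage = 100 * met / len(results)

print(f"Coverage: {coverage:.0f}%")
for finding in findings:
    print(f"Finding: {finding}")
```

The coverage percentage is your current state; the findings list feeds the future-state roadmap.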

Develop A Future State Cybersecurity Roadmap

Now that the current state is defined, it's time to define the strategic roadmap, or future state, of your cybersecurity program. The future state is where you would like to see your cybersecurity program six months to five years out. The gap analysis performed when developing the current state will help define this for you. Objectives which are easy to implement can be put in place fairly quickly; ones that take a fair amount of planning and funding will be placed on a strategic roadmap for future implementation.

Determining where to start can be daunting; however, there are two places to look for assistance. If you chose the Center for Internet Security framework, this is already laid out for you. The Top 20 controls are listed in the order in which they should be implemented within the environment. The first six controls are what CIS calls "Basic." As the name implies, these build the foundation of the program and include:

Center for Internet Security – Basic Controls

  1. Inventory and Control of Hardware Assets
  2. Inventory and Control of Software Assets
  3. Continuous Vulnerability Management
  4. Controlled Use of Administrative Privileges
  5. Secure Configuration for Hardware and Software on Mobile Devices, Laptops, Workstations and Servers
  6. Maintenance, Monitoring and Analysis of Audit Logs

If you decided to choose a different framework, do not worry; there are other ways to determine your starting point, including the help of the Board of Directors or executive management. Understanding the business's requirements, needs, and wants will drive the direction of your cybersecurity program; however, push back where it makes sense.

Restrict AWS Console Access Based On Source IP Address

Zero trust, or risk-based authentication, can be hard to achieve (you can read more about it here). Organizations must trust the identity being used and the location from which the user is authenticating. Many cloud-based services, like AWS, have functionality built in to help protect your account. This is a must for preventing account takeover (ATO) while protecting the confidentiality, integrity, and availability of your AWS systems.

AWS's built-in tools for protecting your account are easy to use. An automated process validates that your Root account has multifactor authentication turned on, that the Root account does not have programmatic access, and so on. One function missing from the GUI is protecting accounts from untrusted networks. To add it, go to IAM and click on Policies. Create a new policy and use the JSON editor to paste the following:

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {"NotIpAddress": 
    {"aws:SourceIp": [
      "Source IP Address"
    ]}}
  }
}

Replace "Source IP Address" with the source IP address(es) of your corporate network.
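If you manage several trusted networks, a small helper can render the policy instead of hand-editing the JSON. This is an illustrative sketch, not part of the original setup; the function name and CIDR ranges are placeholders:

```python
import json

def deny_outside_networks(trusted_cidrs):
    """Render an IAM policy denying all actions from outside trusted_cidrs."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": {
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"NotIpAddress": {"aws:SourceIp": trusted_cidrs}},
        },
    }, indent=2)

# Placeholder corporate ranges -- substitute your own.
policy_json = deny_outside_networks(["203.0.113.0/24", "198.51.100.17/32"])
print(policy_json)
```

The output can be pasted straight into the IAM JSON editor.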

Once the policy has been created, attach it to either a user account or a group that users are a part of. Now when someone tries to log in from outside the network, they will receive an "Access Denied" when trying to access any AWS resources.

For the latest on this policy, or other AWS policies, please check out the GitHub Repo.

PyMyDB – Simplifying MySQL Backups

Let's face it, when was the last time you performed a database backup? How about any backup? PyMyDB was written to help ease the burden of performing MySQL and MariaDB database backups.

Statistics have shown that businesses do not perform regular backups on their IT resources. Worse yet, many businesses close up shop after a major catastrophe due to inadequate backups. What does your backup solution look like?

Written in Python 3 using the Boto3 and MySQLdb modules, PyMyDB connects to the server, queries all databases the user has access to, and performs a MySQL dump. To connect to the database, you must have an Amazon Web Services account and use Secrets Manager, a simple tool that lets you securely store any number of secrets.

The script looks for three stored secrets in Secrets Manager: username, password, and hostname. Once that information is saved, grab the storage location and region name and plug them into the script. When working properly, the script will back up all the databases stored on the server.
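As a hypothetical sketch of how a script like this might assemble the dump command from those three secrets (the function name and secret key names are assumptions for illustration, not PyMyDB's actual code):

```python
def build_dump_command(secret):
    """Assemble a mysqldump invocation from a Secrets Manager payload.

    `secret` mirrors the three stored values described above: username,
    password, and hostname (the key names here are assumed).
    """
    return [
        "mysqldump",
        "--all-databases",
        f"--host={secret['hostname']}",
        f"--user={secret['username']}",
        f"--password={secret['password']}",
    ]

# In the real script the payload would come from AWS Secrets Manager
# (boto3's secretsmanager client); this stand-in dict is for illustration.
cmd = build_dump_command(
    {"username": "backup", "password": "s3cret", "hostname": "db.example.com"}
)
print(" ".join(cmd))
```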

The script can be found at: https://github.com/jasonbrown17/pymydb

***Update 20201021***

The PyMyDB script now has S3 upload functionality. Add the S3 bucket name to the variable for offsite backups.

Building Zero Trust in Authentication

Building Zero Trust

When you think of zero trust, you tend to think of network segmentation: creating communities of interest (COIs) and segmenting servers away from each other to prevent lateral movement within one's network. But network segmentation is just the first step in a zero-trust model. Others include authentication, segregation of duties, and cryptographic certificates. Though all are important, authentication is a difficult one to get right.

Secure Shell, or SSH, is a protocol used for remote connectivity, mainly to UNIX and Linux based operating systems. SSH creates a secure, encrypted connection between the administrator's endpoint and a server. Though SSH is heavily used for connectivity, it has one major flaw: you must trust the host key presented to you upon first login. Trust On First Use, or TOFU, requires the administrator to initially trust the server they are connecting to without knowing the validity of the key being presented. Once trust is given to the unknown key, the administrator is allowed to continue with their username and password.

Trust On First Use

TOFU is nothing new in terms of security and is widely used throughout information technology. Take, for instance, setting up a new firewall or other security appliance. Security appliances are designed to encrypt administrative connections to the management plane, and most, if not all, major players use self-signed certificates to encrypt that communication. This is another example of TOFU: a firewall administrator must first accept an unknown security certificate before being allowed to connect to the device. Accepting unknown, or unverified, TLS certificates is something we tell end users not to do all the time. So why is it OK for us to do so? How do we know our connections are trusted and not being compromised by a man-in-the-middle attack?

How Do I Trust My Connections?

Unfortunately, there is no easy answer to this question. Systems are designed to generate new private and public keys upon installation, ensuring that no two private keys are identical or re-used between systems. When these keys and their certificates are generated for the first time, an administrator has no option but to trust the certificate being handed out.

Cryptographically signing certificates is one way to overcome this problem. The same is done for software package installation and patching: Microsoft, Red Hat, Ubuntu, and Apple all cryptographically sign their software. This prevents the operating system from installing applications created by a malicious user, which could infect one's machine.

Organizations can apply the same level of trust to authentication. Creating an internal Public Key Infrastructure, or PKI, can reduce that uncertainty. A PKI provides validity for those connections because one first trusts the root certificate authority; once that trust has been established, certificates generated and signed by the root certificate authority will also be trusted by the system. Though getting it right the first time might seem difficult, once established it can prove to be one of the best assets security professionals have in their arsenal.
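For SSH specifically, that internal trust anchor can take the form of an SSH certificate authority, which removes the TOFU prompt entirely. A minimal lab sketch (the key comments, identity, and hostnames are placeholders for the example):

```shell
# Create a CA key pair -- the trust anchor clients will be told about.
ssh-keygen -q -t ed25519 -f ssh_ca -N '' -C 'internal-ssh-ca'

# Create a host key for a server, then sign it with the CA,
# producing host_key-cert.pub.
ssh-keygen -q -t ed25519 -f host_key -N '' -C 'web01'
ssh-keygen -q -s ssh_ca -I web01.example.com -h -n web01.example.com host_key.pub

# Clients that trust the CA for *.example.com no longer see a TOFU prompt
# when connecting to any host presenting a CA-signed certificate.
echo "@cert-authority *.example.com $(cat ssh_ca.pub)" >> known_hosts
```

The server would then present the signed certificate via `HostCertificate` in its `sshd_config`.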

BIND Response Policy Zones

Domain Name Service

Accessing resources across the internet is done through the use of IP addresses. When you check your email, search on Google, or visit your favorite social media outlet, you are making a connection to an IP address. The Domain Name Service (DNS) converts a name to an IP address, allowing you to easily remember your favorite website. For instance, DNS will convert www.amazon.com to 89.187.178.56. How could we use BIND and DNS to thwart the bad guys?

Response Policy Zones

One of the best yet least-known features of BIND is its support for Response Policy Zones (RPZs). RPZs allow an administrator to re-write a DNS response before sending it back to the user. In the example above, when a user goes to access Amazon, DNS converts a name to a number; once the web browser knows that number, it reaches out to the server to access its resources. What if we were to manipulate that number, or make it appear to our users as if Amazon did not exist?

This is where RPZs come in. By configuring BIND to receive a recursive DNS lookup and manipulate the response back to the user, you can effectively stop users from accessing malicious sites.

Let us look at the recent privacy and security concerns related to Zoom. Due to its popularity and ease of use, the Zoom video conferencing service has now become a front runner. Not only has Zoombombing, where an uninvited user gains access to your video sharing stream, become a headache for the service, but so have phishing websites. Recently, URLs such as zoompanel.com and zoomdirect.com.au have sprung up. These websites are used to phish a user's Zoom credentials. We can use RPZs to block company personnel or home users from accessing those websites, mitigating the attack.
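A sketch of the policy zone file covering the two phishing domains above might look as follows. In RPZ, a `CNAME .` record is the idiom for answering NXDOMAIN; the SOA serial and timer values here are placeholders:

```
$TTL 300
@       IN SOA  localhost. admin.localhost. ( 2020040601 3600 600 86400 300 )
        IN NS   localhost.

; CNAME to the root (.) rewrites the answer to NXDOMAIN
zoompanel.com           IN CNAME .
zoomdirect.com.au       IN CNAME .
```

After reloading BIND, any client using this resolver receives NXDOMAIN for both names, exactly as shown in the nslookup output further below.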

How Do RPZs Work?

When properly configured, a BIND RPZ will return a different response than the one published on the internet. The following query, sent to a public resolver, returns the valid published IP address for zoomdirect.com.au:

nslookup zoomdirect.com.au 8.8.8.8
Server: 8.8.8.8
Address: 8.8.8.8#53


Non-authoritative answer:
Name: zoomdirect.com.au
Address: 119.81.45.82

The query responded with the IP address of 119.81.45.82.

What does an RPZ response look like?

nslookup zoomdirect.com.au ns1.svarthal.net
Server: ns1.svarthal.net
Address: 45.19.203.68#53

** server can’t find zoomdirect.com.au: NXDOMAIN

In the example above, the response changed from 119.81.45.82 to NXDOMAIN. NXDOMAIN means the domain does not exist, so to the user the phishing server is effectively non-existent.

DNS Architecture

Deploying a local BIND DNS server for an organization can be quite daunting. There are a multitude of options available within the configuration of the service. Though secure configurations are extremely important, one must not overlook how the service is architected within the network. Architecting the service correctly from the start will ease configuration headaches further down the road.

Network Segmentation

Network segmentation must be considered when standing up new systems. Segmentation is performed with a firewall or proxy device that restricts network traffic. This restriction protects the segment by blocking unused protocols, closing unneeded network ports, and denying access from systems and services outside of the segment.
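As a sketch of what such a restriction might look like with nftables, a DNS segment could admit only DNS traffic and drop everything else. The addresses here are hypothetical documentation ranges:

```
# /etc/nftables.conf — filter traffic entering the DNS segment
table inet dns_segment {
    chain forward {
        type filter hook forward priority 0; policy drop;

        # Allow DNS queries into the segment (UDP and TCP port 53)
        ip daddr 192.0.2.0/28 udp dport 53 accept
        ip daddr 192.0.2.0/28 tcp dport 53 accept

        # Allow established/related return traffic
        ct state established,related accept
    }
}
```

With a default-drop policy, anything not explicitly permitted, such as SSH from outside the segment, never reaches the DNS servers.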

To segment services properly, an organization can either segment individual systems from each other or create a Community Of Interest (COI). A COI is a grouping of like systems within a network segment. For example, a DNS COI is a network segment where all of the organization's DNS servers reside. Though this is not as secure as segmenting each DNS server away from the others, it is more secure than placing all systems within the same flat network segment. An architectural diagram is shown below:

Primary DNS

When deploying any DNS service, whether it is BIND or a different system, the primary DNS server must be as secure as possible. The primary server is the source of truth for all of your DNS naming information. If an attacker were to gain access to this server, or if the system were misconfigured, they could change DNS records and redirect users to phishing sites. This is also why an organization must deploy two or more secondary servers.

Hidden Primary

In a hidden primary configuration, the primary DNS server is never exposed to anyone. The organization configures its DHCP server to advertise the secondaries, not the primary, and firewall rules restrict DNS traffic to the secondary servers while disallowing access to the primary from anyone outside the IT department. This limits the exposure of the primary server, and not advertising it further reduces the amount of information a potential attacker can gather.
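In BIND terms, a hidden primary is largely a matter of notifying and transferring zones only to the secondaries and refusing queries from everyone else. The addresses below are hypothetical:

```
// named.conf on the hidden primary
options {
    notify explicit;                              // notify only listed servers
    also-notify { 192.0.2.11; 192.0.2.12; };      // the secondaries
    allow-transfer { 192.0.2.11; 192.0.2.12; };   // zone transfers to them only
    allow-query { 192.0.2.96/28; };               // IT management subnet only
};
```

The zone's NS records, meanwhile, list only the secondaries, so the primary never appears in any public DNS data.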

Secondary DNS

Because the primary server is never advertised to users or the public, the secondary servers are the ones exposed. These servers receive naming information from the primary through zone transfers and are configured to answer lookups. Firewalls or DNS proxies must be configured to allow only DNS traffic to traverse the network and to limit the number of requests, preventing certain types of attacks.
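On a secondary, the matching configuration sketch pulls the zone from the hidden primary and applies BIND's response-rate-limiting to blunt abusive query floods. Zone name and addresses are illustrative:

```
// named.conf on an exposed secondary
options {
    rate-limit {
        responses-per-second 10;   // throttle repetitive query sources
    };
};

zone "example.com" {
    type slave;                    // "secondary" in newer BIND releases
    masters { 192.0.2.10; };       // the hidden primary
    file "db.example.com";
};
```

The rate-limit block is what enforces the "limit the number of requests" guidance above, reducing the server's usefulness in reflection and amplification attacks.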

The hidden primary architectural concept is not new, but it is not well known. Limiting the exposure of the primary server, and allowing users to access only the secondaries, prevents a number of attacks an organization could face. This is an important first step when planning an initial DNS deployment, or a redeployment, of the service. Getting it right from the start will help secure the rest of the configuration down the road.
