[Completed] Professional Google Cybersecurity Specialization C5/8; assets, threats, and vulnerabilities

Introduction to Course 5

What do you picture when you think about the security field?
This might make you think of a dark room with people hunched over their computers.
Maybe you picture a person in a lab carefully analyzing evidence.
Or, maybe you imagine a guard standing watch in front of a building.
The truth is, no matter what thoughts cross your mind, all of these examples are part of the wide world of security.

Hi, my name is Da'Queshia.
I have worked as a security engineer for four years.
I'm excited to be your instructor for this course and share some of my experience with you.
At Google, I'm part of a diverse team of security professionals who all have different backgrounds and unique perspectives.
For example, in my role, I work to secure Gmail.
My daily activities include developing new security features and fixing vulnerabilities in the application to make email safer for our users.

Some members of my team began working in security after graduating from college.
Many others found their way into the field after years of working in another industry.
Security teams come in all different shapes and sizes.
Each member of a team has a role to play.
While our specific functions within the group differ, we all share the same objective: protecting valuable assets from harm.
Accomplishing this mission involves a combination of people, processes, and tools.
In this course, you'll learn about each of these in detail.

First, you'll be introduced to the world of asset security.
You'll learn about the variety of assets that organizations protect and how these factor into a company's overall approach to security.
Then, you'll begin exploring the security systems and controls that teams use to proactively protect people and their information.
All systems have weaknesses that can be improved upon.
When those weaknesses are neglected or ignored, they can lead to serious problems.
In this section of the course, you'll focus on common vulnerabilities in systems and the ways security teams stay ahead of potential problems.
Finally, you'll learn about the threats to asset security.
You'll also be introduced to the threat modeling process that security teams use to stay one step ahead of potential attacks.
In this field, we try to do everything possible to avoid being put in a compromised position.
By the end of this course, you'll have a clearer picture of the ways people, processes, and technology work together to protect all that's important.
Throughout the course, you'll also get an idea of the exciting career opportunities available to you.

Security truly is an interdisciplinary field.
Your background and perspective are an asset.
Whether you're a recent college graduate or starting a new career path, the security field presents a wide range of possibilities.
So what do you say?
Are you ready to go on this journey with me?

Course 5 overview

Hello, and welcome to Assets, Threats, and Vulnerabilities, the fifth course in the Google Cybersecurity Certificate. You’re on an exciting journey!

By the end of this course, you’ll build an understanding of the wide range of assets organizations must protect. You’ll explore many of the most common security controls used to protect valuable assets from risk. You’ll also discover the variety of ways assets are vulnerable to threats by adopting an attacker mindset.

Certificate program progress

The Google Cybersecurity Certificate program has eight courses. Assets, Threats, and Vulnerabilities is the fifth course.

The final course covers incident classification, escalation, and ways to communicate with stakeholders. It closes out the program with tips on how to engage with the cybersecurity community and prepare for your job search.

Course 5 content

Each course of this certificate program is broken into weeks. You can complete courses at your own pace, but the weekly breakdowns are designed to help you finish the entire Google Cybersecurity Certificate in about six months.

What’s to come? Here’s a quick overview of the skills you’ll learn in each week of this course.

Week 1: Introduction to asset security

You will be introduced to how organizations determine what assets to protect. You'll learn about the connection between managing risk and classifying assets by exploring the unique challenge of securing physical and digital assets. You'll also be introduced to the National Institute of Standards and Technology (NIST) framework standards, guidelines, and best practices for managing cybersecurity risk.

Week 2: Protect organizational assets

You will focus on security controls that protect organizational assets. You'll explore how privacy impacts asset security and understand the role that encryption plays in maintaining the privacy of digital assets. You'll also explore how authentication and authorization systems help verify a user’s identity.

Week 3: Vulnerabilities in systems

You will build an understanding of the vulnerability management process. You'll learn about common vulnerabilities and develop an attacker mindset by examining the ways vulnerabilities can become threats to asset security if they are exploited.

Week 4: Threats to asset security

Finally, you will explore common types of threats to digital asset security. You'll also examine the tools and techniques used by cybercriminals to target assets. In addition, you'll be introduced to the threat modeling process and learn ways security professionals stay ahead of security breaches.

What to expect

Each course offers many types of learning opportunities.

Tips for success

Please follow the community guidelines to ensure that the learning community remains a welcoming, friendly, and supportive place for all members.

Da'Queshia: My path to cybersecurity

Hi. My name is Da'Queshia.
I'm a security engineer.
That basically means I work securing Google's products so users like you aren't vulnerable.
Before I entered cybersecurity, I worked installing internet service.
I also worked at a chip factory.
I worked in fast food.
I sold shoes at the mall.
I did a lot of things before I made it here.
A lot of what I learned in my past jobs I actually use every day.
Some of it is my soft skills like time management, people skills, and communication.
As a new cybersecurity analyst, it's important to be able to communicate, take feedback, and feel uncomfortable, not with the people around you, but with the problems you're trying to
solve because sometimes it requires you to think outside of the box and be challenged.
I would describe my job as a Google security guard. Because I work on the Gmail security team, it's my job to protect Gmail.
Some of those threats are people who are sending you bad emails, who are trying to get your user credentials or get you to click on a phishing link.
When it comes to vulnerabilities, some of those could be something like unsanitized input, which can lead to trouble.
My typical work day starts like everyone else.
I check my emails and then from there I go into my bug queue,
which is essentially where people tell me there's a problem with one of our products.
I start doing a little bit of research and then I like to explore the bug a little bit more.
I like to figure out if this can break this, can it also break this, and if it can, what else can I do with it?
Then from there, I look for a solution to make sure that I fix that hole and then any other
holes that we might have in our security.
One of the things you'll learn about in this course is threat modeling, and that's something I use every day.
Whenever I get a bug, it's part of my job to figure out the attack tree and what types of vectors could be used to take advantage of vulnerabilities.
No one is born knowing everything.
I know that sounds really cliche or super obvious, but it helps me because it puts into perspective the time
and effort that everyone has to put in to learn something new.
So be patient with yourself.
Don't let anyone discourage you from cybersecurity.
Taking this course is one step closer to reaching your goal.
Don't get discouraged now. Keep going.

Understand risks, threats, and vulnerabilities

When security events occur, you’ll need to work in close coordination with others to address the problem. Doing so quickly requires clear communication between you and your team to get the job done.

Previously, you learned about three foundational security terms:

  • Risk: Anything that can impact the confidentiality, integrity, or availability of an asset

  • Threat: Any circumstance or event that can negatively impact assets

  • Vulnerability: A weakness that can be exploited by a threat

These words tend to be used interchangeably in everyday life. But in security, they are used to describe very specific concepts when responding to and planning for security events. In this reading, you’ll identify what each term represents and how they are related.

Security risk

Security plans are all about how an organization defines risk. However, this definition can vary widely by organization. As you may recall, a risk is anything that can impact the confidentiality, integrity, or availability of an asset. Since organizations have particular assets that they value, they tend to differ in how they interpret and approach risk.

One way to interpret risk is to consider the potential effects that negative events can have on a business. Another way to present this idea is with this calculation:

Likelihood x Impact = Risk

For example, you risk being late when you drive a car to work. This negative event is more likely to happen if you get a flat tire along the way. And the impact could be serious, like losing your job. All these factors influence how you approach commuting to work every day. The same is true for how businesses handle security risks.

In general, we calculate risk in this field to help:

  • Prevent costly and disruptive events

  • Identify improvements that can be made to systems and processes

  • Determine which risks can be tolerated

  • Prioritize the critical assets that require attention

The business impact of a negative event will always depend on the asset and the situation. Your primary focus as a security professional will be to focus on the likelihood side of the equation by dealing with certain factors that increase the odds of a problem.
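As a rough illustration, the Likelihood x Impact formula can be sketched in code. The 1-to-5 scales and the example assets below are assumptions for demonstration, not values from the course:

```python
# Illustrative sketch of Likelihood x Impact = Risk.
# The 1-5 scales and example assets are assumptions, not course material.

def risk_score(likelihood: int, impact: int) -> int:
    """Score risk on a 1-25 scale from likelihood and impact (each 1-5)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be between 1 and 5")
    return likelihood * impact

# Rank hypothetical assets so the riskiest get attention first.
assets = {
    "customer database": risk_score(likelihood=3, impact=5),
    "public website":    risk_score(likelihood=4, impact=2),
    "test server":       risk_score(likelihood=2, impact=1),
}

for name, score in sorted(assets.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score}")
```

A scoring sketch like this mirrors the goals in the list above: it makes it possible to tolerate some risks and prioritize the critical assets that need attention first.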

Risk factors

As you’ll discover throughout this course, there are two broad risk factors that you’ll be concerned with in the field:

  • Threats

  • Vulnerabilities

The risk of an asset being harmed or damaged depends greatly on whether a threat takes advantage of vulnerabilities.

Let’s apply this to the risk of being late to work. A threat would be a nail puncturing your tire, since tires are vulnerable to running over sharp objects. In terms of security planning, you would want to reduce the likelihood of this risk by driving on a clean road.

Categories of threat

Threats are circumstances or events that can negatively impact assets. There are many different types of threats. However, they are commonly categorized as two types: intentional and unintentional.

For example, an intentional threat might be a malicious hacker who gains access to sensitive information by targeting a misconfigured application. An unintentional threat might be an employee who holds the door open for an unknown person and grants them access to a restricted area. Either one can cause an event that must be responded to.

Categories of vulnerability

Vulnerabilities are weaknesses that can be exploited by threats. There’s a wide range of vulnerabilities, but they can be grouped into two categories: technical and human.

For example, a technical vulnerability can be misconfigured software that might give an unauthorized person access to important data. A human vulnerability can be a forgetful employee who loses their access card in a parking lot. Either one can lead to risk.

Key takeaways

Risks, threats, and vulnerabilities have very specific meanings in security. Knowing the relationship between them can help you build a strong foundation as you grow essential skills and knowledge as a security analyst. This can help you gain credibility in the industry by demonstrating that you have working knowledge of the field. And it signals to your future colleagues that you’re a member of the global security community.

Tri: Life in asset security

I'm Tri, a security engineer at Google.
My department is Detection and Response.
Let's see, what does my everyday look like?
Well, of course I have the free lunch and coffee, which is nice.
And then I finally get to my desk and I open up the SIEM to see what kind of exciting events are there for me to look into
and what threats there could be out there for me to analyze.
Also, I work on improving our analysis for detection of potential threats.
So my security passion developed at a young age.
I was a victim of a hack, believe it or not.
After school every day at that time I would go home and play a computer game.
One day I got home, I brought it up and it said, "Your CD key is in use by...", and then it gave some strange name there that I didn't recognize.
At first I felt shocked.
I had bought this game myself and somebody stole my CD key, but it did provide me this motivation to start to learn how to defend myself.
For example, I learned about manual removal of malware, and that became one of my favorite topics.
Also, for fun, I started doing some white hat hacker activity against some of my friends.
Asset security is a very important field, and there's many varieties of assets that you could be looking into to protect.
My favorite part is building the detections that actually have the potential to catch malicious behavior.
In asset management security, you have the ability to accurately inventory all of the assets, which include IP, user data, and employee machines, and to make sure you have a security posture that's on par with what you need.
There's always new technology coming on the scene, new hardware, and we are responsible for understanding what potential new threats are out there.
Problem solving ability and creative thinking is important in cybersecurity because there's always complex problems, and people need to be able to think outside of the box, think creatively, and think holistically as they approach their solutions to mitigate risks.
Cybersecurity is a noble occupation.
Many things can happen.
Many bad things can happen on the Internet, but we can be there to stand up against it and we can be there to do something about it.
We can be there to protect our users, or family members, or friends. 
That responsibility is heavy. 
But also, of course, it's a very important mission.
And I am proud to be within the security team.

Security starts with asset classification

It can be really stressful when you have trouble finding something important.
You're late to an appointment and can't find your keys!
We all find ourselves in situations like these at one time or another.
Believe it or not, organizations deal with the same kind of trouble.
Take a few seconds to think of the number of important assets you have nearby.
I'm thinking of my phone, wallet, and keys, for example.

Next, imagine that you've just joined a security team for a small online retailer.
The company has been growing over the past few years, adding more and more customers.
As a result, they're expanding their security department to protect the increasing numbers of assets they have.
Let's say each of you is responsible for 10 assets.
That's a lot of assets! 
Even in this small business setting, that's an incredible amount of things that need protecting.

A fundamental truth of security is you can only protect the things you account for.
Asset management is the process of tracking assets and the risks that affect them.
All security plans revolve around asset management.
Recall that assets include any item perceived as having value to an organization.
Equipment, data, and intellectual property are just a few of the wide range of
assets businesses want to protect.
A critical part of every organization's security plan is keeping track of its assets.

Asset management starts with having an asset inventory, a catalog of assets that need to be protected.
This is a central part of protecting organizational assets.
Without this record, organizations run the risk of losing track of all that's important to them.
A good way to think of asset inventories is as a shepherd protecting sheep.
Having an accurate count of the number of sheep helps in a lot of ways.
For example, it will be easier to allocate resources, like food, to take care of them.
Another benefit of asset inventory might be that you'd get an alert if one of them goes missing.
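To make the idea concrete, here is a minimal sketch of an inventory check in the spirit of the shepherd analogy. The asset names and catalog are hypothetical:

```python
# Minimal sketch of an asset inventory check: keep a catalog of what you
# expect, and flag anything that goes missing (the "lost sheep" alert).
# All asset names below are hypothetical examples.

expected_inventory = {"laptop-001", "laptop-002", "server-db", "badge-printer"}

def find_missing(observed: set[str]) -> set[str]:
    """Return cataloged assets that were not observed in the environment."""
    return expected_inventory - observed

missing = find_missing({"laptop-001", "server-db", "badge-printer"})
print(f"Missing assets: {missing}")
```

Without the `expected_inventory` catalog, the check has nothing to compare against, which is the point of the reading: you can only protect the things you account for.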

Once more, think
of the important assets you have nearby.
Just like me, you're probably able to rate them according to the level of importance.
I would rank my wallet ahead of my shoes, for example.
In security, this practice is known as asset classification.
In general, asset classification is the practice of labeling assets based on their sensitivity and importance to an organization.
Organizations label assets differently.
Many of them follow a basic classification scheme:
public, internal-only, confidential, and restricted.

Public assets can be shared with anyone.
Internal-only can be shared with anyone in the organization but should not be shared outside of it.
And confidential assets should only be accessed by those working on a specific project.
Assets classified as restricted are typically highly sensitive and must be protected.
Assets with this label are considered need-to-know.
Examples include intellectual property and health or payment information.
For example, a growing online retailer might mark internal emails about a new product as confidential because only those working on the new product should know about it.
They might also label the doors at their offices with a restricted sign to keep out everyone who doesn't have a specific reason to be in there.
These are just a couple of everyday examples that you may be familiar with from your prior experience.

For the most part, classification determines whether
an asset can be disclosed, altered, or destroyed.
Asset management is a continuous process,
one that helps uncover unexpected gaps in security and potential risks.
Keeping track of all that's important to an organization is an essential part of security planning.

Common classification requirements

Asset management is the process of tracking assets and the risks that affect them. The idea behind this process is simple: you can only protect what you know you have. 

Previously, you learned that identifying, tracking, and classifying assets are all important parts of asset management. In this reading, you’ll learn more about the purpose and benefits of asset classification, including common classification levels.

A security professional taking inventory of a variety of boxes.

Why asset management matters

Keeping assets safe requires a workable system that helps businesses operate smoothly. Setting these systems up requires having detailed knowledge of the assets in an environment. For example, a bank needs to have money available each day to serve its customers. Equipment, devices, and processes need to be in place to ensure that money is available and secure from unauthorized access.

Organizations protect a variety of different assets. Some examples might include:

  • Digital assets such as customer data or financial records.

  • Information systems that process data, like networks or software.

  • Physical assets which can include facilities, equipment, or supplies.

  • Intangible assets such as brand reputation or intellectual property.

Regardless of its type, every asset should be classified and accounted for. As you may recall, asset classification is the practice of labeling assets based on sensitivity and importance to an organization. How those two factors are determined varies, but gauging the sensitivity and importance of an asset typically requires knowing the following:

  • What you have

  • Where it is

  • Who owns it, and

  • How important it is

An organization that classifies its assets does so based on these characteristics. Doing so helps them determine the sensitivity and value of an asset.

Common asset classifications

Asset classification helps organizations implement an effective risk management strategy. It also helps them prioritize security resources, reduce IT costs, and stay in compliance with legal regulations.

The most common classification scheme has four levels: restricted, confidential, internal-only, and public.

  • Restricted is the highest level. This category is reserved for incredibly sensitive assets, like need-to-know information.

  • Confidential refers to assets whose disclosure may lead to a significant negative impact on an organization.

  • Internal-only describes assets that are available to employees and business partners.

  • Public is the lowest level of classification. These assets have no negative consequences to the organization if they’re released.

How this scheme is applied depends greatly on the characteristics of an asset. It might surprise you to learn that identifying an asset’s owner is sometimes the most complicated characteristic to determine.

Note: Although many organizations adopt this classification scheme, there can be variability at the highest levels. For example, government organizations label their most sensitive assets as confidential instead of restricted.
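One way to picture the four-level scheme is as an ordered set of labels that can be compared when deciding whether an asset may be disclosed. This is an illustrative sketch, not an implementation the course prescribes:

```python
from enum import IntEnum

# The four common classification levels, ordered least to most sensitive.
# This ordering is an illustration; real organizations vary, as noted above.
class Classification(IntEnum):
    PUBLIC = 1
    INTERNAL_ONLY = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

def may_disclose(asset_level: Classification,
                 audience_clearance: Classification) -> bool:
    """An audience may view an asset only if cleared at or above its level."""
    return audience_clearance >= asset_level

# A partner cleared for internal-only material cannot see restricted assets.
print(may_disclose(Classification.RESTRICTED, Classification.INTERNAL_ONLY))
print(may_disclose(Classification.PUBLIC, Classification.INTERNAL_ONLY))
```

Making the levels comparable is what lets classification drive decisions about whether an asset can be disclosed, altered, or destroyed.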

Challenges of classifying information

Identifying the owner of certain assets is straightforward, like the owner of a building. Other types of assets can be trickier to identify. This is especially true when it comes to information.

For example, a business might issue a laptop to one of its employees to allow them to work remotely. You might assume the business is the asset owner in this situation. But, what if the employee uses the laptop for personal matters, like storing their photos?

Ownership is just one characteristic that makes classifying information a challenge. Another concern is that information can have multiple classification values at the same time. For example, consider a letter addressed to you in the mail. The letter contains some public information that’s okay to share, like your name. It also contains fairly confidential pieces of information that you’d rather only be available to certain people, like your address. You’ll learn more about how these challenges are addressed as you continue through the program.
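The letter example can be sketched as a single record whose fields carry different labels, which is part of why one classification for the whole item falls short. The field names and values below are hypothetical:

```python
# Sketch of one piece of information carrying multiple classification values,
# like the letter example: the name is public, the address is confidential.
# Field names and values are hypothetical.
letter = {
    "recipient_name": {"value": "A. Person", "classification": "public"},
    "street_address": {"value": "123 Example St", "classification": "confidential"},
}

def redact_for(audience_levels: set[str]) -> dict:
    """Return only the fields the audience is allowed to see."""
    return {
        field: data["value"]
        for field, data in letter.items()
        if data["classification"] in audience_levels
    }

print(redact_for({"public"}))  # only the recipient's name survives
```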

Key takeaways

Every business is different. Each business will have specific requirements to address when devising their security strategy. Knowing why and how businesses classify their assets is an important skill to have as a security professional. Information is one of the most important assets in the world. As a cybersecurity professional, you will be closely involved with protecting information from damage, disclosure, and misuse. Recognizing the challenges that businesses face classifying this type of asset is a key to helping them solve their security needs.

The emergence of cloud security

One of the most significant technology developments this century has been the emergence of cloud computing. The United Kingdom's National Cyber Security Centre defines cloud computing as, “An on-demand, massively scalable service, hosted on shared infrastructure, accessible via the internet.”

Earlier, you learned that most information is in the form of data, which is in a constant state of change. In recent years, businesses started moving their data to the cloud. The adoption of cloud-based services has complicated how information is kept safe online. In this reading, you’ll learn about these challenges and the opportunities they’ve created for security professionals.

A cloud lifting a business out of a marketplace and into the sky.

Soaring into the cloud

Starting an online business used to be a complicated and costly process. In years past, companies had to build and maintain their own internal solutions to operate in the digital marketplace. Now, it’s much easier for anyone to participate because of the cloud.

The availability of cloud technologies has drastically changed how businesses operate online. These new tools allow companies to scale and adapt quickly while also lowering their costs. Despite these benefits, the shift to cloud-based services has also introduced a range of new cybersecurity challenges that put assets at risk.

Cloud-based services

The term cloud-based services refers to a variety of on-demand or web-based business solutions. Depending on a company’s needs and budget, services can range from website hosting, to application development environments, to entire back-end infrastructure.

There are three main categories of cloud-based services:

  • Software as a service (SaaS)

  • Platform as a service (PaaS)

  • Infrastructure as a service (IaaS)

Software as a service (SaaS)

SaaS refers to front-end applications that users access via a web browser. The service providers host, manage, and maintain all of the back-end systems for those applications. Common examples of SaaS services include applications like Gmail™ email service, Slack, and Zoom software.

Platform as a service (PaaS)

PaaS refers to back-end application development tools that clients can access online. Developers use these resources to write code and build, manage, and deploy their own apps. Meanwhile, the cloud service providers host and maintain the back-end hardware and software that the apps use to operate. Some examples of PaaS services include Google App Engine™ platform, Heroku®, and VMware Cloud Foundry. 

Infrastructure as a service (IaaS)

IaaS customers are given remote access to a range of back-end systems that are hosted by the cloud service provider. This includes data processing servers, storage, networking resources, and more. Resources are commonly licensed as needed, making it a cost-effective alternative to buying and maintaining equipment on premises.

Cloud-based services allow companies to connect with their customers, employees, and business partners over the internet. Some of the largest organizations in the world offer cloud-based services:

  • Google Cloud Platform

  • Microsoft Azure

Cloud security

Shifting applications and infrastructure over to the cloud can make it easier to operate an online business. It can also complicate keeping data private and safe. Cloud security is a growing subfield of cybersecurity that specifically focuses on the protection of data, applications, and infrastructure in the cloud.

In a traditional model, organizations had their entire IT infrastructure on premises. Protecting those systems was entirely up to the internal security team in that environment. These responsibilities are not so clearly defined when part or all of an operational environment is in the cloud.

For example, a PaaS client pays to access the resources they need to build their applications. So, it is reasonable to expect them to be responsible for securing the apps they build. On the other hand, the responsibility for maintaining the security of the servers they are accessing should belong to the cloud service provider because there are other clients using the same systems.

In cloud security, this concept is known as the shared responsibility model. Clients are commonly responsible for securing anything that is directly within their control:

  • Identity and access management

  • Resource configuration

  • Data handling

Note: The amount of responsibility that is delegated to a service provider varies depending on the service being used: SaaS, PaaS, and IaaS.
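As a rough sketch of how that delegation shifts, the table below generalizes responsibilities across SaaS, PaaS, and IaaS. The exact split varies by provider and contract, so treat this as an illustration rather than a definitive model:

```python
# Illustrative sketch of the shared responsibility model.
# The split below is a common generalization; real contracts vary by provider.

CLIENT, PROVIDER = "client", "provider"

responsibility = {
    "data handling":          {"SaaS": CLIENT,   "PaaS": CLIENT,   "IaaS": CLIENT},
    "identity and access":    {"SaaS": CLIENT,   "PaaS": CLIENT,   "IaaS": CLIENT},
    "application code":       {"SaaS": PROVIDER, "PaaS": CLIENT,   "IaaS": CLIENT},
    "runtime and middleware": {"SaaS": PROVIDER, "PaaS": PROVIDER, "IaaS": CLIENT},
    "physical servers":       {"SaaS": PROVIDER, "PaaS": PROVIDER, "IaaS": PROVIDER},
}

def client_duties(model: str) -> list[str]:
    """List the layers a client secures under a given service model."""
    return [layer for layer, who in responsibility.items() if who[model] == CLIENT]

print(client_duties("PaaS"))
```

This matches the PaaS example above: the client secures the apps they build, while the provider secures the shared servers underneath.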

Cloud security challenges

All service providers do their best to deliver secure products to their customers. Much of their success depends on preventing breaches and how well they can protect sensitive information. However, since data is stored in the cloud and accessed over the internet, several challenges arise:

  • Misconfiguration is one of the biggest concerns. Customers of cloud-based services are responsible for configuring their own security environment. Oftentimes, they use out-of-the-box configurations that fail to address their specific security objectives.

  • Cloud-native breaches are more likely to occur due to misconfigured services.

  • Monitoring access might be difficult depending on the client and level of service.

  • Meeting regulatory standards is also a concern, particularly in industries that are required by law to follow specific requirements such as HIPAA, PCI DSS, and GDPR.

Many other challenges exist besides these. As more businesses adopt cloud-based services, there’s a growing need for cloud security professionals to meet a growing number of risks. Burning Glass, a leading labor market analytics firm, ranks cloud security among the most in-demand skills in cybersecurity.

Key takeaways

So much of the global marketplace has shifted to cloud-based services. Cloud technology is still new, resulting in the emergence of new security models and a range of security challenges. And it’s likely that other concerns might arise as more businesses become reliant on the cloud. Being familiar with the cloud and the different services that are available is an important step towards supporting any organization’s efforts to protect information online.

Resources for more information

Cloud security is one of the fastest growing subfields of cybersecurity. There are a variety of resources available online to learn more about this specialized topic.

One helpful resource is a certificate program designed to teach you the foundational skills needed to become a cloud security specialist.

Elements of a security plan

Security is all about people, processes, and technology.
It's a team effort, and I mean that literally.
Protecting assets extends well beyond one person or a group of people in an IT department.
The truth of the matter is that security is a culture.
It's a shared set of values that spans all levels of an organization.
These values touch everyone, from employees, to vendors, to customers.
Protecting digital and physical assets requires everyone to participate, which can be a challenge.

That's what security plans are for!
Plans come in many shapes and sizes, but they all share a common goal:
to be prepared for risks when they happen.
Placing the focus on people is what leads to the most effective security plans.
Considering the diverse backgrounds and perspectives of everyone involved ensures that no one is left out when something goes wrong.
We talked earlier about risk as being anything that can impact the confidentiality, integrity, or availability of an asset.
Most security plans address risks by breaking them down according to categories and factors.

Some common risk categories might include the damage, disclosure, or loss of information.
Any of these can be due to factors like physical damage or the malfunction of a device.
There are also factors like attacks and human error.
For example, a new school teacher may be asked to sign a contract before their first day of class.
The agreement may warn against some common risks associated with human error, like using a personal email to send sensitive information.
A security plan may require that all new hires sign off on this agreement, effectively spreading the values that ensure everyone's in alignment.
This is just one example of the types and causes of risk that a plan might address.
These things vary widely depending on the company.
But how these plans are communicated is similar across industries.

Security plans consist of three basic elements: policies, standards, and procedures.
These three elements are how companies share their security plans.
These words tend to be used interchangeably outside of security, but you'll soon discover that they each have a very specific meaning and function in this context.

A policy in security is a set of rules that reduce risk and protect information.
Policies are the foundation of every security plan.
They give everyone in and out of an organization guidance by addressing questions like, what are we protecting and why?
Policies focus on the strategic side of things by identifying the scope, objectives, and limitations of a security plan.
For instance, newly hired employees at many companies are required to sign off on an acceptable use policy, or AUP.
These provisions outline secure ways that an employee may access corporate systems.

Standards are the next part.
These have a tactical function, as they concern how well we're protecting assets.
In security, standards are references that inform how to set policies.
A good way to think of standards is that they create a point of reference.
For example, many companies use the password management standard identified in NIST Special Publication 800-63B to improve their security policies by specifying that employees' passwords must be at least eight characters long.
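To make that concrete, here is a minimal Python sketch of a password-length check. Only the eight-character minimum comes from NIST SP 800-63B's rule for user-chosen passwords; the function name and everything else is illustrative.

```python
# Minimal sketch of a password-length control informed by NIST SP 800-63B,
# which requires user-chosen passwords to be at least 8 characters long.
# Only the minimum length comes from the standard; the rest is illustrative.

MIN_LENGTH = 8  # NIST SP 800-63B minimum for user-chosen passwords

def meets_length_standard(password: str) -> bool:
    """Return True if the password meets the minimum length requirement."""
    return len(password) >= MIN_LENGTH

print(meets_length_standard("hunter2"))        # 7 characters -> False
print(meets_length_standard("correct-horse"))  # 13 characters -> True
```

A real standard covers much more than length (NIST SP 800-63B also recommends checking passwords against lists of known compromised passwords, for example), but even this simple check shows how a standard turns a policy goal into a measurable rule.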

The last part of a plan is its procedures.
Procedures are step-by-step instructions to perform a specific security task.
Organizations usually keep multiple procedure documents that are used throughout the company, like how employees can choose secure passwords, or how they can securely reset a password if it's been locked.
Sharing clear and actionable procedures with everyone creates accountability, consistency, and efficiency across an organization.

Policies, standards, and procedures vary widely from one company to another because they are tailored to each organization's goals.
Simply understanding the structure of security plans is a great start.
For now, I hope you have a clearer picture of what policies, standards, and procedures are, and how they are essential to making security a team effort. 




The NIST Cybersecurity Framework

Having a plan is just one part of securing assets.
Once the plan is in action, the other part is making sure everyone's following along.
In security, we call this compliance.

Compliance is the process of adhering to internal standards and external regulations.
Small companies and large organizations around the world place security compliance at the top of their list of priorities.
At a high-level, maintaining trust, reputation, safety, and the integrity of your data are just a few reasons to be concerned about compliance.
Fines, penalties, and lawsuits are other reasons.
This is particularly true for companies in highly regulated industries, like health care, energy, and finance.
Being out of compliance with a regulation can cause long-lasting financial and reputational effects that can seriously impact a business.

Regulations are rules set by a government or other authority to control the way something is done.
Like policies, regulations exist to protect people and their information, but on a larger scale.
Compliance can be a complex process because of the many regulations that exist all around the world.
For our purpose, we're going to focus on a framework of security compliance, the U.S. based NIST Cybersecurity Framework.

Earlier in the program, you learned about the National Institute of Standards and Technology, or NIST.
One of the primary roles of NIST is to openly provide companies with a set of frameworks and security standards that reflect key security related regulations.
The NIST Cybersecurity Framework is a voluntary framework that consists of standards, guidelines, and best practices
to manage cybersecurity risk. Commonly known as the CSF, this framework was developed to help businesses secure one of their most important assets, information.
The CSF consists of three main components: the core, its tiers, and its profiles.
Let's explore each of these together to build a better understanding of how NIST's CSF is used.

The core is basically a simplified version of the functions, or duties, of a security plan.
The CSF core identifies five broad functions:
identify, protect, detect, respond, and recover.
Think of these categories of the core as a security checklist.

After the core, the next NIST component we'll discuss is its tiers.
These provide security teams with a way to measure performance across each of the five functions of the core.
Tiers range from Level-1 to Level-4.
Level-1, or partial, indicates a function is reaching bare minimum standards.
Level-4, or adaptive, is an indication that a function is being performed at an exemplary standard.
You may have noticed that CSF tiers aren't a yes or no proposition; instead, there's a range of values. 
That's because tiers are designed as a way of showing organizations what is and isn't working with their security plans.

Lastly, profiles are the final component of CSF.
These provide insight into the current state of a security plan.
One way to think of profiles is like photos capturing a moment in time.
Comparing photos of the same subject taken at different times can provide useful insights.
For example, without two photos to compare, you might not notice how a tree has changed over time.
It's the same with NIST profiles.

Good security practice is about more than avoiding fines and attacks.
It demonstrates that you care about people and their information.
Before we go, let's visit the core's functions one more time to look at where we've been and where we're going.
The first function is identify. Our previous discussions on asset management and risk assessment relate to that function.
Coming up, we're going to focus on many of the categories of the second function, the protect function. Meet you there! 

Security guidelines in action

Organizations often face an overwhelming amount of risk. Developing a security plan from the beginning that addresses all risk can be challenging. This makes security frameworks a useful option.

Previously, you learned about the NIST Cybersecurity Framework (CSF). A major benefit of the CSF is that it's flexible and can be applied to any industry. In this reading, you’ll explore how the NIST CSF can be implemented.

The NIST CSF's five functions: identify, protect, detect, respond, and recover.

Origins of the framework

Originally released in 2014, the NIST Cybersecurity Framework was developed to protect critical infrastructure in the United States. NIST was selected to develop the CSF because it is an unbiased source of scientific data and practices. NIST eventually adapted the CSF to fit the needs of businesses in the public and private sectors. The goal was to make the framework more flexible, so that it's easier to adopt for small businesses or anyone else who might lack the resources to develop their own security plans.

Components of the CSF

As you might recall, the framework consists of three main components: the core, tiers, and profiles. In the following sections, you'll learn more about each of these CSF components.

Core

The CSF core is a set of desired cybersecurity outcomes that help organizations customize their security plan. It consists of five functions, or parts: Identify, Protect, Detect, Respond, and Recover. These functions are commonly used as an informative reference to help organizations identify their most important assets and protect those assets with appropriate safeguards. The CSF core is also used to understand ways to detect attacks and develop response and recovery plans should an attack happen.

Tiers

The CSF tiers are a way of measuring the sophistication of an organization's cybersecurity program. CSF tiers are measured on a scale of 1 to 4. Tier 1 is the lowest score, indicating that a limited set of security controls have been implemented. Overall, CSF tiers are used to assess an organization's security posture and identify areas for improvement.

Profiles

The CSF profiles are pre-made templates of the NIST CSF that are developed by a team of industry experts. CSF profiles are tailored to address the specific risks of an organization or industry. They are used to help organizations develop a baseline for their cybersecurity plans, or as a way of comparing their current cybersecurity posture to a specific industry standard.

Note: The core, tiers, and profiles were each designed to help any business improve their security operations. Although there are only three components, the entire framework consists of a complex system of subcategories and processes.

Implementing the CSF

As you might recall, compliance is an important concept in security. Compliance is the process of adhering to internal standards and external regulations. In other words, compliance is a way of measuring how well an organization is protecting their assets. The NIST Cybersecurity Framework (CSF) is a voluntary framework that consists of standards, guidelines, and best practices to manage cybersecurity risk. Organizations may choose to use the CSF to achieve compliance with a variety of regulations.

Note: Regulations are rules that must be followed, while frameworks are resources you can choose to use.

Since its creation, many businesses have used the NIST CSF. However, the CSF can be a challenge to implement due to its high level of detail. It can also be tough to find where the framework fits in. For example, some businesses have established security plans, making it unclear how the CSF can benefit them. Alternatively, some businesses might be in the early stages of building their plans and need a place to start.

In any scenario, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) provides detailed guidance that any organization can use to implement the CSF. This is a quick overview and summary of their recommendations:

  • Create a current profile of the security operations and outline the specific needs of your business.

  • Perform a risk assessment to identify which of your current operations are meeting business and regulatory standards.

  • Analyze and prioritize existing gaps in security operations that place the business's assets at risk.

  • Implement a plan of action to achieve your organization’s goals and objectives.
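As a loose sketch of the gap-analysis step in the list above, a team might compare a current profile against a target profile, function by function. The five function names come from the CSF core, but the tier scores and code structure here are invented for illustration.

```python
# Hypothetical gap analysis: compare current CSF tiers (per core function)
# against a target profile. Function names are from the CSF core; the tier
# scores are invented for illustration.

current_profile = {"identify": 3, "protect": 2, "detect": 1,
                   "respond": 2, "recover": 1}
target_profile  = {"identify": 3, "protect": 4, "detect": 3,
                   "respond": 3, "recover": 3}

def find_gaps(current: dict, target: dict) -> dict:
    """Return each function whose current tier falls short of the target,
    mapped to the size of the shortfall."""
    return {fn: target[fn] - current.get(fn, 1)
            for fn in target
            if current.get(fn, 1) < target[fn]}

# Largest gaps first, to help prioritize remediation
for fn, gap in sorted(find_gaps(current_profile, target_profile).items(),
                      key=lambda item: -item[1]):
    print(f"{fn}: {gap} tier(s) below target")
```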

Pro tip: Always consider current risk, threat, and vulnerability trends when using the NIST CSF. 

You can learn more about implementing the CSF in this report by CISA that outlines how the framework was applied in the commercial facilities sector.

Industries embracing the CSF

The NIST CSF has continued to evolve since its introduction in 2014. Its design is influenced by the standards and best practices of some of the largest companies in the world.

A benefit of the framework is that it aligns with the security practices of many organizations across the global economy. It also helps with regulatory compliance that might be shared by business partners.

Key takeaways

The NIST CSF is a flexible resource that organizations may choose to use to assess and improve their security posture. It's a useful framework that combines the security best practices of industries around the world. Implementing the CSF can be a challenge for any organization. The CSF can help businesses meet regulatory compliance requirements to avoid financial and reputational risks.

Wrap-up; terms and definitions from course 5, week 1

Well done! You made it to the end of this section!
Being a security practitioner takes commitment and a desire to learn.
A big part of the job involves keeping current with best practices and emerging trends.
Thinking back on my own journey into the world of security, I'm so proud of you for your continued commitment.
We've covered a lot of material this week, and this is a good time to reflect and look back
on the key concepts we explored together.

We covered the building blocks of organizational risk management: assets, threats, and vulnerabilities. 

We also spent some time demonstrating the importance of asset inventories.
It's much easier to protect company assets if you know where they are and who's responsible for them.

After that, we moved on to explore the challenges in a rapidly changing digital world.
Part of protecting data in this world is understanding if it's in use, in transit, or at rest.

Finally, in our high-level exploration of policies, standards, and procedures, we talked about
how each of them factor into achieving security goals.
There's no one-size-fits-all approach to achieving security.
While exploring the NIST Cybersecurity Framework, you gained an appreciation of how it supports good security practices.

Attackers are also constantly building their skills and finding new ways to break through the defenses we put up.
Remember, the landscape is always changing.
There's always more to learn if you want to be a good security practitioner.

Next up, we're going to expand our security mindset by learning more about the different systems security teams use
to protect organizational assets.
I'm looking forward to it! 

Glossary terms from week 1

Asset: An item perceived as having value to an organization

Asset classification: The practice of labeling assets based on sensitivity and importance to an organization

Asset inventory: A catalog of assets that need to be protected

Asset management: The process of tracking assets and the risks that affect them 

Compliance: The process of adhering to internal standards and external regulations

Data: Information that is translated, processed, or stored by a computer

Data at rest: Data not currently being accessed

Data in transit: Data traveling from one point to another

Data in use: Data being accessed by one or more users

Information security (InfoSec): The practice of keeping data in all states away from unauthorized users

National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF): A voluntary framework that consists of standards, guidelines, and best practices to manage cybersecurity risk

Policy: A set of rules that reduce risk and protect information

Procedures: Step-by-step instructions to perform a specific security task

Regulations: Rules set by a government or other authority to control the way something is done

Risk: Anything that can impact confidentiality, integrity, or availability of an asset

Standards: References that inform how to set policies

Threat: Any circumstance or event that can negatively impact assets

Vulnerability: A weakness that can be exploited by a threat


Welcome to week 2

I was fascinated by a worldwide malware event that happened in 2017.
I started watching videos and preparing to take certification tests, just like you.
I felt overwhelmed at first, but my curiosity and passion have driven me to continue learning in this field.
I always remind myself that no one is born knowing everything and everyone is on a learning journey.
Even now, I still remember what it was like to start out in this profession.
So, believe me when I tell you that you're making great progress, and I am proud of your effort!
Now, before looking ahead to where we're headed on our journey into the world of security, let's take a moment to look back on where we've been.

Previously, we focused mostly on the concept of assets and risks in security.
We covered topics like the importance of managing assets and keeping them safe.
We discussed how the digital world presents new challenges and opportunities
in the field of security.
We also spent some time exploring security plans.

With this solid foundation, we're ready to keep expanding our security mindset.
In this section, we'll cover the security controls that are used to proactively keep assets safe.
I used the word proactively there on purpose.
As you'll soon discover, these controls are the protections that we put in place to stop problems before they happen.

We're going to begin by taking an in-depth look at privacy.
Here, you'll learn about the effective data handling processes that keep information safe.
Next, you'll explore the role of encryption and hashing in safeguarding information.
Finally, you'll learn about the standard access controls that companies use to authorize and authenticate users.
Alright, are you ready to keep moving ahead? I know I am!

Security controls

These days, information is in so many places at once.
As a result, organizations are under a lot of pressure to implement effective security controls that protect everyone's information from being stolen or exposed.
Security controls are safeguards designed to reduce specific security risks.
They include a wide range of tools that protect assets before, during, and after an event.

Security controls can be organized into three types:
technical, operational, and managerial.

Technical control types include the many technologies used to protect assets.
This includes encryption, authentication systems, and others.

Operational controls relate to maintaining the day-to-day security environment.
Generally, people perform these controls like awareness training and incident response.

Managerial controls are centered around how effectively the other two types reduce risk.
Examples of managerial controls include policies, standards, and procedures.

Typically, an organization's security policy outlines the controls needed to achieve its goals.
Information privacy plays a key role in these decisions.
Information privacy is the protection of data from unauthorized access and distribution.
Information privacy is about the right to choose.
People and organizations alike deserve the right to decide when, how, and to what extent private information about them is shared.
Security controls are the technologies used to regulate information privacy.

For example, imagine using a travel app to book a flight.
You might browse through a list of flights and find one at a good price.
To reserve a seat, you enter some personal information, like your name, email, and
credit card number for payment.
The transaction goes through successfully, and you booked your flight.
Now, you reasonably expect the airline to access the information you entered in order to complete the reservation.

However, should everyone at the company have access to your information?
A person working in the marketing department shouldn't need access
to your credit card information.
It makes sense to share that information with a customer support agent,
but they should only be able to access it while helping with your reservation.
To maintain privacy,
security controls are intended to limit access based on the user and situation.
This is known as the principle of least privilege.
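The travel-app scenario can be sketched as a simple access check. The roles, resources, and rules below are all hypothetical, meant only to show how access can depend on both who the user is and the situation they're in.

```python
# Hypothetical sketch of a least-privilege access check for the travel-app
# example. The roles, resources, and rules are invented for illustration:
# access depends on both the user's role and the situation.

ROLE_PERMISSIONS = {
    "marketing":     {"email"},                 # no payment data at all
    "support_agent": {"email", "credit_card"},  # card data is conditional
}

def can_access(role: str, resource: str, actively_assisting: bool = False) -> bool:
    """Grant access only if the role allows the resource, and require that
    support agents be actively assisting before viewing card data."""
    allowed = resource in ROLE_PERMISSIONS.get(role, set())
    if role == "support_agent" and resource == "credit_card":
        return allowed and actively_assisting
    return allowed

print(can_access("marketing", "credit_card"))                                # False
print(can_access("support_agent", "credit_card", actively_assisting=True))   # True
print(can_access("support_agent", "credit_card", actively_assisting=False))  # False
```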

Security controls should be designed with the principle of least privilege in mind.
When they are, they rely on differentiating between data owners and
data custodians.

A data owner is a person who decides who can access, edit, use, or
destroy their information.
The idea is very straightforward except in cases where there are multiple owners.
For example, the intellectual property of an organization can have
multiple data owners.

A data custodian is anyone or anything that's responsible for
the safe handling, transport, and storage of information.
Did you notice that I mentioned, "anything?"
That's because, aside from people, organizations and
their systems are also custodians of people's information.

There are other considerations besides these when implementing security controls.
Remember that data is an asset.
Like any other asset, information privacy requires proper classification and
handling.
As we progress in this section,
we'll continue exploring other security controls that make this possible.





Principle of least privilege

Security controls are essential to keeping sensitive data private and safe. One of the most common controls is the principle of least privilege, also referred to as PoLP or least privilege. The principle of least privilege is a security concept in which a user is only granted the minimum level of access and authorization required to complete a task or function.

Least privilege is a fundamental security control that supports the confidentiality, integrity, and availability (CIA) triad of information. In this reading, you'll learn how the principle of least privilege reduces risk, how it's commonly implemented, and why it should be routinely audited.

Limiting access reduces risk

Every business needs to plan for the risk of data theft, misuse, or abuse. Implementing the principle of least privilege can greatly reduce the risk of costly incidents like data breaches by:

  • Limiting access to sensitive information

  • Reducing the chances of accidental data modification, tampering, or loss

  • Supporting system monitoring and administration

Least privilege greatly reduces the likelihood of a successful attack by connecting specific resources to specific users and placing limits on what they can do. It's an important security control that should be applied to any asset. Clearly defining who or what your users are is usually the first step of implementing least privilege effectively.

Note: Least privilege is closely related to another fundamental security principle, the separation of duties—a security concept that divides tasks and responsibilities among different users to prevent giving a single user complete control over critical business functions. You'll learn more about separation of duties in a different reading about identity and access management.

Determining access and authorization

To implement least privilege, access and authorization must be determined first. There are two questions to ask to do so: 

  • Who is the user? 

  • How much access do they need to a specific resource? 

Determining who the user is usually straightforward. A user can refer to a person, like a customer, an employee, or a vendor. It can also refer to a device or software that's connected to your business network. In general, every user should have their own account. Accounts are typically stored and managed within an organization's directory service.

These are the most common types of user accounts:

  • Guest accounts are provided to external users who need to access an internal network, like customers, clients, contractors, or business partners.

  • User accounts are assigned to staff based on their job duties.

  • Service accounts are granted to applications or software that needs to interact with other software on the network.

  • Privileged accounts have elevated permissions or administrative access.

It's best practice to determine a baseline access level for each account type before implementing least privilege. However, the appropriate access level can change from one moment to the next. For example, a customer support representative should only have access to your information while they are helping you. Your data should then become inaccessible when the support agent starts working with another customer and they are no longer actively assisting you. Least privilege can only reduce risk if user accounts are routinely and consistently monitored.
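One way to picture baseline access levels is as a mapping from account type to a default permission set. The four account types follow the list above, while the permission names are invented for illustration.

```python
# Hypothetical baseline permission sets per account type. The four account
# types come from the reading; the permission names are invented.

BASELINE_ACCESS = {
    "guest":      {"read_public"},
    "user":       {"read_public", "read_internal"},
    "service":    {"read_internal", "call_api"},
    "privileged": {"read_public", "read_internal", "call_api", "admin"},
}

def grant_baseline(account_type: str) -> set:
    """Return the baseline permissions for a new account, denying by
    default when the account type is unknown."""
    return set(BASELINE_ACCESS.get(account_type, set()))

print(grant_baseline("guest"))    # {'read_public'}
print(grant_baseline("unknown"))  # set() -- deny by default
```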

Pro tip: Passwords play an important role when implementing the principle of least privilege. Even if user accounts are assigned appropriately, an insecure password can compromise your systems.

Auditing account privileges

Setting up the right user accounts and assigning them the appropriate privileges is a helpful first step. Periodically auditing those accounts is a key part of keeping your company’s systems secure.

There are three common approaches to auditing user accounts:

  • Usage audits

  • Privilege audits

  • Account change audits

As a security professional, you might be involved with any of these processes.

Usage audits

When conducting a usage audit, the security team will review which resources each account is accessing and what the user is doing with the resource. Usage audits can help determine whether users are acting in accordance with an organization’s security policies. They can also help identify whether a user has permissions that can be revoked because they are no longer being used.

Privilege audits

Users tend to accumulate more access privileges than they need over time, an issue known as privilege creep. This might occur if an employee receives a promotion or switches teams and their job duties change. Privilege audits assess whether a user's role is in alignment with the resources they have access to.

Account change audits

Account directory services keep records and logs associated with each user. Changes to an account are usually saved and can be used to audit the directory for suspicious activity, like multiple attempts to change an account password. Performing account change audits helps to ensure that all account changes are made by authorized users.
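Here's a hypothetical sketch of an account change audit, assuming a simple list-of-records log. The log format and the three-attempt threshold are invented; real directory services keep far richer records.

```python
# Hypothetical account change audit: flag accounts with repeated password
# change attempts. The log format and three-attempt threshold are invented.

from collections import Counter

change_log = [
    {"account": "jdoe",   "action": "password_change_attempt"},
    {"account": "jdoe",   "action": "password_change_attempt"},
    {"account": "jdoe",   "action": "password_change_attempt"},
    {"account": "asmith", "action": "password_change_attempt"},
    {"account": "asmith", "action": "email_update"},
]

def flag_suspicious(log: list, threshold: int = 3) -> list:
    """Return accounts whose password change attempts meet the threshold."""
    attempts = Counter(entry["account"] for entry in log
                       if entry["action"] == "password_change_attempt")
    return [acct for acct, count in attempts.items() if count >= threshold]

print(flag_suspicious(change_log))  # ['jdoe']
```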

Note: Most directory services can be configured to alert system administrators of suspicious activity.

Key takeaways

The principle of least privilege is a security control that can reduce the risk of unauthorized access to sensitive information and resources. Setting up and configuring user accounts with the right levels of access and authorization is an important step toward implementing least privilege. Auditing user accounts and revoking unnecessary access rights is an important practice that helps to maintain the confidentiality, integrity, and availability of information.


The data lifecycle

Organizations of all sizes handle a large amount of data that must be kept private. You learned that data can be vulnerable whether it is at rest, in use, or in transit. Regardless of the state it is in, information should be kept private by limiting access and authorization.

In security, data vulnerabilities are often mapped in a model known as the data lifecycle. Each stage of the data lifecycle plays an important role in the security controls that are put in place to maintain the CIA triad of information. In this reading, you will learn about the data lifecycle, the plans that determine how data is protected, and the specific types of data that require extra attention.

The data lifecycle

The data lifecycle is an important model that security teams consider when protecting information. It influences how they set policies that align with business objectives. It also plays an important role in the technologies security teams use to make information accessible.

In general, the data lifecycle has five stages. Each describes how data flows through an organization from the moment it is created until it is no longer useful: collect, store, use, archive, and destroy.

Protecting information at each stage of this process means keeping it accessible and recoverable should something go wrong.

Data governance

Businesses handle massive amounts of data every day. New information is constantly being collected from internal and external sources. A structured approach to managing all of this data is the best way to keep it private and secure.

Data governance is a set of processes that define how an organization manages information. Governance often includes policies that specify how to keep data private, accurate, available, and secure throughout its lifecycle.

Effective data governance is a collaborative activity that relies on people. Data governance policies commonly categorize individuals into specific roles, such as data owners, data custodians, and data stewards.

Businesses store, move, and transform data using a wide range of IT systems. Data governance policies often assign accountability to data owners, custodians, and stewards.

Note: As a data custodian, you will primarily be responsible for maintaining security and privacy rules for your organization.

Protecting data at every stage

Most security plans include a specific policy that outlines how information will be managed across an organization. This is known as a data governance policy. These documents clearly define procedures that should be followed to participate in keeping data safe. They place limits on who or what can access data. Security professionals are important participants in data governance. As a data custodian, you will be responsible for ensuring that data isn’t damaged, stolen, or misused.

Legally protected information

Data is more than just a bunch of 1s and 0s being processed by a computer. Data can represent someone's personal thoughts, actions, and choices. It can represent a purchase, a sensitive medical decision, and everything in between. For this reason, data owners should be the ones deciding whether or not to share their data. As a security professional, you must always respect a person's data privacy decisions.

Securing data can be challenging. In large part, that's because data owners generate more data than they can manage. As a result, data custodians and stewards sometimes lack direct, explicit instructions on how they should handle specific types of data. Governments and other regulatory agencies have bridged this gap by creating rules that specify the types of information that organizations must protect by default, such as personally identifiable information (PII) and protected health information (PHI).

Overall, it's important to protect all types of personal information from unauthorized use and disclosure.

Key takeaways

Keeping information private has never been so important. Many organizations have data governance policies that outline how they plan to protect sensitive information. As a data custodian, you will play a key role in keeping information accessible and safe throughout its lifecycle. There are various types of information and controls that you’ll encounter in the field. As you continue through this course, you’ll learn more about major security controls that keep data private.

Information privacy: Regulations and compliance

Security and privacy have a close relationship. As you may recall, people have the right to control how their personal data is collected and used. Organizations also have a responsibility to protect the information they are collecting from being compromised or misused. As a security professional, you will be highly involved in these efforts.

Previously, you learned how regulations and compliance reduce security risk. To review, refer to the reading about how security controls, frameworks, and compliance regulations are used together to manage security and minimize risk. In this reading, you will learn how information privacy regulations affect data handling practices. You'll also learn about some of the most influential security regulations in the world.

Information security vs. information privacy

Security and privacy are two terms that often get used interchangeably outside of this field. Although the two concepts are connected, they represent specific functions:

  • Information privacy refers to the protection against unauthorized access and distribution of data.

  • Information security (InfoSec) refers to the practice of keeping data in all states away from unauthorized users.

The key difference: Privacy is about providing people with control over their personal information and how it's shared. Security is about protecting people’s choices and keeping their information safe from potential threats.

For example, a retail company might want to collect specific kinds of personal information about its customers for marketing purposes, like their age, gender, and location. How this private information will be used should be disclosed to customers before it's collected. In addition, customers should be given an option to opt-out if they decide not to share their data.

Once the company obtains consent to collect personal information, it might put specific security controls in place to protect that private data from unauthorized access, use, or disclosure. The company should also have security controls in place to respect the privacy of all stakeholders and anyone who chose to opt-out.

Note: Privacy and security are both essential for maintaining customer trust and brand reputation.

Why privacy matters in security

Data privacy and protection are topics that started gaining a lot of attention in the late 1990s. At that time, tech companies suddenly went from processing people’s data to storing and using it for business purposes. For example, if a user searched for a product online, companies began storing and sharing access to information about that user’s search history with other companies. Businesses were then able to deliver personalized shopping experiences to the user for free.

Eventually this practice led to a global conversation about whether these organizations had the right to collect and share someone’s private data. Additionally, the issue of data security became a greater concern; the more organizations collected data, the more vulnerable it was to being abused, misused, or stolen.

Many organizations became more concerned about the issues of data privacy. Businesses became more transparent about how they were collecting, storing, and using information. They also began implementing more security measures to protect people's data privacy. However, without clear rules in place, protections were inconsistently applied.

Note: The more data is collected, stored, and used, the more vulnerable it is to breaches and threats.

Notable privacy regulations

Businesses are required to abide by certain laws to operate. As you might recall, regulations are rules set by a government or another authority to control the way something is done. Privacy regulations in particular exist to protect a user from having their information collected, used, or shared without their consent. Regulations may also describe the security measures that need to be in place to keep private information away from threats.

Three of the most influential industry regulations that every security professional should know about are:

  • General Data Protection Regulation (GDPR)

  • Payment Card Industry Data Security Standard (PCI DSS)

  • Health Insurance Portability and Accountability Act (HIPAA)

GDPR

GDPR is a set of rules and regulations developed by the European Union (EU) that puts data owners in total control of their personal information. Under GDPR, types of personal information include a person's name, address, phone number, financial information, and medical information.

The GDPR applies to any business that handles the data of EU citizens or residents, regardless of where that business operates. For example, a U.S.-based company that handles the data of EU visitors to its website is subject to the GDPR's provisions.

PCI DSS

PCI DSS is a set of security standards formed by major organizations in the financial industry. This regulation aims to secure credit and debit card transactions against data theft and fraud.

HIPAA

HIPAA is a U.S. law that requires the protection of sensitive patient health information. HIPAA prohibits the disclosure of a person's medical information without their knowledge and consent.

Note: These regulations influence data handling at many organizations around the world even though they were developed by specific nations.

Several other security and privacy compliance laws exist. Which ones your organization needs to follow will depend on the industry and the area of authority. Regardless of the circumstances, regulatory compliance is important to every business.

Security assessments and audits

Businesses should comply with important regulations in their industry. Doing so validates that they have met a minimum level of security while also demonstrating their dedication to maintaining data privacy.

Meeting compliance standards is usually a continual, two-part process of security audits and assessments:

  • A security audit is a review of an organization's security controls, policies, and procedures against a set of expectations.

  • A security assessment is a check to determine how resilient current security implementations are against threats.

For example, if a regulation states that multi-factor authentication (MFA) must be enabled for all administrator accounts, an audit might be conducted to check those user accounts for compliance. After the audit, the internal team might perform a security assessment that determines many users are using weak passwords. Based on their assessment, the team could decide to enable MFA on all user accounts to improve their overall security posture.

Note: Compliance with legal regulations, such as GDPR, can be determined during audits.

As a security analyst, you are likely to be involved with security audits and assessments in the field. Businesses usually perform security audits less frequently, approximately once per year. Security audits may be performed both internally and externally by different third-party groups.

In contrast, security assessments are usually performed more frequently, about every three-to-six months. Security assessments are typically performed by internal employees, often as preparation for a security audit. Both evaluations are incredibly important ways to ensure that your systems are effectively protecting everyone's privacy.

Key takeaways

A growing number of businesses are making it a priority to protect and govern the use of sensitive data to maintain customer trust. Security professionals should think about data and the need for privacy in these terms. Organizations commonly use security assessments and audits to evaluate gaps in their security plans. While it is possible to overlook or delay addressing the results of an assessment, doing so can have serious business consequences, such as fines or data breaches.

Heather: The importance of protecting PII

Hello, my name is Heather and I'm the Vice President of Security Engineering at Google.
PII is everywhere.
It's a fundamental part of how we are all working online all the time.
If you are using online resources, you are probably putting your PII out there somewhere.
There's some of your PII that lots of people know, such as your name.
And then there's sensitive data that you don't want very many people to know, such as your bank account number or your private medical health information.
And so we make these distinctions often because this kind of information needs to be handled differently.
Everything that we do now, from school to voting, to registering our car happens online.
And because of that, it's so important that we have safety built-in by default into all of our systems. Here's some tips.
You should always encrypt the data as much as you can when it's being stored at rest.
And secondly, when data is in transit over the Internet, we always want to encrypt it using TLS or SSL.
Third, within your company, you should think very clearly about who has access to that data.
It should be almost no one if it's very sensitive.
And in the rare cases where somebody does need to access that data, there should be a record of that access, who accessed it, and a justification as to why.
And you should have a program to look at the audit records for that data.
The most important thing to remember is if you have a situation where PII has been compromised, remember that's someone's personal information and your response wants to be grounded in that reality.
They need to be able to trust the infrastructure, the systems, the websites, the devices.
They need to be able to trust the experience they're having.
For me, that's the mission: To help keep billions of people safe online every day.

Fundamentals of cryptography

The internet is an open, public system with a lot of data flowing through it.
Even though we all send and store information online, there's some information that we choose to keep private.
In security, this type of data is known as personally identifiable information.
Personally identifiable information, or PII, is any information that can be used to infer an individual's identity.
This can include things like someone's name, medical and financial information, photos, emails, or fingerprints.

Maintaining the privacy of PII online is difficult.
It takes the right security controls to do so.
One of the main security controls used to protect information online is cryptography.
Cryptography is the process of transforming information into a form that unintended readers can't understand.
Data of any kind is kept secret using a two-step process:
encryption to hide the information, and decryption to unhide it.

Imagine sending an email to a friend.
The process starts by taking data in its original and readable form, known as plaintext.
Encryption takes that information and scrambles it into an unreadable form, known as ciphertext.
We then use decryption to unscramble the ciphertext back into plaintext form, making it readable again.

Hiding and unhiding private information is a practice that's been around for a long time.
Way before computers!
One of the earliest cryptographic methods is known as Caesar's cipher.
This method is named after a Roman general, Julius Caesar, who ruled the Roman empire near the end of the first century BC.
He used it to keep messages between him and his military generals private.

Caesar's cipher is a pretty simple algorithm that works by shifting letters in the Roman alphabet forward by a fixed number of spaces.
An algorithm is a set of rules that solve a problem.
Specifically in cryptography, a cipher is an algorithm that encrypts information.

For example, a message encoded with Caesar's cipher using a shift of 3 would encode an A as a D, a B as an E, a C as an F, and so on.
In this example, you could send a friend a message that said, "hello" using a shift of 3, and it would read "khoor."
Now, you might be wondering how would you know the shift a message encrypted with Caesar's cipher is using. The answer to that is, you need the key!

A cryptographic key is a mechanism that decrypts ciphertext.
In our example, the key would tell you that my message is encrypted by 3 shifts.
With that information, you can unlock the hidden message!
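To make the shift-by-3 example concrete, here is a minimal Python sketch of Caesar's cipher, where the key is simply the shift amount:

```python
# Toy implementation of Caesar's cipher -- illustration only, not secure.
# The "key" is simply the number of positions each letter is shifted.

def caesar_encrypt(plaintext: str, shift: int) -> str:
    result = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord("a") if ch.islower() else ord("A")
            # Shift within the 26-letter alphabet, wrapping around at Z
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation unchanged
    return "".join(result)

def caesar_decrypt(ciphertext: str, shift: int) -> str:
    # Decryption is just encryption with the opposite shift
    return caesar_encrypt(ciphertext, -shift)

print(caesar_encrypt("hello", 3))  # khoor
print(caesar_decrypt("khoor", 3))  # hello
```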

Every form of encryption relies on both a cipher and key to secure the exchange of information.
Caesar's cipher is not widely used today because of a couple of major flaws.
One concerns the cipher itself. The other relates to the key.
This particular cipher relies entirely on the characters of the Roman alphabet to hide information.
For example, consider a message written using the English alphabet, which is only 26 characters.
Even without the key, it's pretty simple to crack a message secured with Caesar's cipher by shifting letters 26 different ways.

In information security, this tactic is known as a brute force attack, a trial-and-error process of discovering private information.
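Because the key space is so small, the attack can be sketched in a few lines of Python: simply try every possible shift and look for readable text.

```python
# Brute-forcing Caesar's cipher: with so few possible shifts,
# an attacker can try them all and pick out the readable result.

def caesar_decrypt(ciphertext: str, shift: int) -> str:
    result = []
    for ch in ciphertext:
        if ch.isalpha():
            base = ord("a") if ch.islower() else ord("A")
            result.append(chr((ord(ch) - base - shift) % 26 + base))
        else:
            result.append(ch)
    return "".join(result)

ciphertext = "khoor"
for shift in range(26):
    # One of these candidates will be the plaintext (shift 3 here)
    print(shift, caesar_decrypt(ciphertext, shift))
```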

The other major flaw of Caesar's cipher is that it relies on a single key.
If that key was lost or stolen, there's nothing stopping someone from accessing private information.
Properly keeping track of cryptographic keys is an important part of security.
To start, it's important to ensure that these keys are not stored in public places, and to share them separately from the information they will decrypt.

Caesar's cipher is just one of many algorithms used to protect people's privacy.
Due to its limitations, we rely on more complex algorithms to secure information online.
Our next focus is exploring how modern algorithms work to keep information private.

Public key infrastructure PKI

Computers use a lot of encryption algorithms to send and store information online.
They're all helpful when it comes to hiding private information, but only as long as their keys are protected.
Can you imagine having to keep track of the encryption keys protecting all of your personal information online? Neither can I, and we don't have to, thanks to something known as public key infrastructure.

Public key infrastructure, or PKI, is an encryption framework that secures the exchange of information online.
It's a broad system that makes accessing information fast, easy, and secure.
So, how does it all work?

PKI is a two-step process.
It all starts with the exchange of encrypted information.
This involves either asymmetric encryption, symmetric encryption, or both.

Asymmetric encryption involves the use of a public and private key pair for encryption and decryption of data.
Let's imagine this as a box that can be opened with two keys. 
One key, the public key, can only be used to access the slot and add items to the box.
Since the public key can't be used to remove items, it can be copied and shared with people all around the world to add items.
On the other hand, the second key, the private key, opens the box fully, so that the items inside can be removed.
Only the owner of the box has access to the private key that unlocks it.

Using a public key allows the people and servers you're communicating with to see and send you encrypted information that only you can decrypt with your private key.
This two-key system makes asymmetric encryption a secure way to exchange information online;
however, it also slows down the process.

Symmetric encryption, on the other hand, is a faster and simpler approach to key management.
Symmetric encryption involves the use of a single secret key to exchange information.

Let's imagine the locked box again.
Instead of two keys, symmetric encryption uses the same key.
The owner can use it to open the box, add items, and close it again. When they want to share access, they can give the secret key to anyone else to do the same.
Exchanging a single secret key may make web communications faster, but it also makes them less secure.

PKI uses both asymmetric and symmetric encryption, sometimes in conjunction with one another.
It all depends on whether speed or security is the priority.
For example, mobile chat applications use asymmetric encryption to establish a connection between people at
the start of a conversation when security is the priority.
Afterwards, when the speed of communications back-and-forth is the priority, symmetric encryption takes over.

While both have their own strengths and weaknesses, they share a common vulnerability: establishing trust between the sender and receiver.
Both processes rely on sharing keys that can be misused, lost, or stolen.
This isn't a problem when we exchange information in person because we can use our senses to tell the difference between
those we trust and those we don't trust.
Computers, on the other hand, aren't naturally equipped to make this distinction.
That's where the second step of PKI applies.
PKI addresses the vulnerability of key sharing by establishing trust using a system of
digital certificates between computers and networks.

A digital certificate is a file that verifies the identity of a public key holder.
Most online information is exchanged using digital certificates.
Users, companies, and networks hold one and exchange them when communicating information online
as a way of signaling trust.
Let's look at an example of how digital certificates are created.

Let's say an online business is about to launch their website, and they want to
obtain a digital certificate.
When they register their domain, the hosting company sends certain information over to a trusted certificate authority, or CA.
The information provided is usually basic things like the company name and the country where its headquarters are located.
A public key for the site is also provided.
The certificate authority then uses this data to verify the company's identity.
When it's confirmed, the CA encrypts the data with its own private key.
Finally, they create a digital certificate that contains the encrypted company data.
It also contains CA's digital signature to prove that it's authentic.

Digital certificates are a lot like a digital ID badge that's used online to restrict or grant access to information.
This is how PKI solves the trust issue.
Combined with asymmetric and symmetric encryption, this two-step approach to exchanging secure information between trusted sources is what makes PKI such a useful security control.

Symmetric and asymmetric encryption

Previously, you learned these terms: 

  • Encryption: the process of converting data from a readable format to an encoded format

  • Public key infrastructure (PKI):  an encryption framework that secures the exchange of online information

  • Cipher: an algorithm that encrypts information

All digital information deserves to be kept private, safe, and secure. Encryption is one key to doing that! It is useful for transforming information into a form that unintended recipients cannot understand. In this reading, you’ll compare symmetric and asymmetric encryption and learn about some well-known algorithms for each.

Types of encryption

There are two main types of encryption:

  • Symmetric encryption is the use of a single secret key to exchange information. Because it uses one key for encryption and decryption, the sender and receiver must know the secret key to lock or unlock the cipher.

  • Asymmetric encryption is the use of a public and private key pair for encryption and decryption of data. It uses two separate keys: a public key and a private key. The public key is used to encrypt data, and the private key decrypts it. The private key is only given to users with authorized access.
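To illustrate the single-key idea behind symmetric encryption (and not any real algorithm), here is a toy XOR-based sketch in Python. This is emphatically not a secure cipher; it only shows that the same secret key both locks and unlocks the data:

```python
# Toy single-key (symmetric) scheme using XOR -- illustration only,
# NOT a secure cipher. The same secret key encrypts and decrypts.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte of data with a repeating byte of the key
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"sekrit"  # hypothetical shared key
ciphertext = xor_cipher(b"meet me at noon", secret_key)
plaintext = xor_cipher(ciphertext, secret_key)  # same key reverses it
print(plaintext)
```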

The importance of key length

Ciphers are vulnerable to brute force attacks, which use a trial and error process to discover private information. This tactic is the digital equivalent of trying every number in a combination lock until finding the right one. In modern encryption, longer key lengths are considered to be more secure. Longer key lengths mean more possibilities that an attacker needs to try to unlock a cipher.

One drawback to having long encryption keys is slower processing times. Although short key lengths are generally less secure, they’re much faster to compute. Providing fast data communication online while keeping information safe is a delicate balancing act. 
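The trade-off is easy to see numerically: each added bit doubles the number of keys a brute force attack must try. A quick Python sketch:

```python
# Keyspace size doubles with every added bit of key length.
for bits in (56, 128, 256):
    print(f"{bits}-bit key: {2**bits:.3e} possible keys")
```

Even at billions of guesses per second, the jump from 56 to 128 bits moves brute forcing from feasible to practically impossible.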

Approved algorithms

Many web applications use a combination of symmetric and asymmetric encryption. This is how they balance user experience with safeguarding information. As an analyst, you should be aware of the most widely-used algorithms.

Symmetric algorithms

  • Triple DES (3DES) is known as a block cipher because of the way it converts plaintext into ciphertext in “blocks.” Its origins trace back to the Data Encryption Standard (DES), which was developed in the early 1970s. DES was one of the earliest symmetric encryption algorithms that generated 64-bit keys. A bit is the smallest unit of data measurement on a computer. As you might imagine, Triple DES generates keys that are 192 bits, or three times as long. Despite the longer keys, many organizations are moving away from using Triple DES due to limitations on the amount of data that can be encrypted. However, Triple DES is likely to remain in use for backwards compatibility purposes.   

  • Advanced Encryption Standard (AES) is one of the most secure symmetric algorithms today. AES generates keys that are 128, 192, or 256 bits. Cryptographic keys of this size are considered to be safe from brute force attacks. It’s estimated that brute forcing an AES 128-bit key could take a modern computer billions of years!

Asymmetric algorithms

  • Rivest Shamir Adleman (RSA) is named after its three creators who developed it while at the Massachusetts Institute of Technology (MIT). RSA is one of the first asymmetric encryption algorithms that produces a public and private key pair. Asymmetric algorithms like RSA produce even longer key lengths. In part, this is due to the fact that these functions are creating two keys. RSA key sizes are 1,024, 2,048, or 4,096 bits. RSA is mainly used to protect highly sensitive data.

  • Digital Signature Algorithm (DSA) is a standard asymmetric algorithm that was introduced by NIST in the early 1990s. DSA also generates key lengths of 2,048 bits. This algorithm is widely used today as a complement to RSA in public key infrastructure.
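To make the public/private key pair concrete, here is textbook RSA with deliberately tiny, hypothetical primes. Real RSA uses 2,048-bit or larger keys plus padding schemes; this sketch only shows the keypair relationship:

```python
# Textbook RSA with tiny primes -- illustration only.
p, q = 61, 53              # two small (insecure) primes
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)   # encrypt with the public key (e, n)
decrypted = pow(ciphertext, d, n) # decrypt with the private key (d, n)
print(ciphertext, decrypted)      # decrypted == 65
```

Notice that anyone holding (e, n) can encrypt, but only the holder of d can decrypt -- the core idea behind asymmetric encryption.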

Generating keys

These algorithms must be implemented when an organization chooses one to protect their data. One way this is done is using OpenSSL, which is an open-source command line tool that can be used to generate public and private keys. OpenSSL is commonly used by computers to verify digital certificates that are exchanged as part of public key infrastructure.

Note: OpenSSL is just one option. There are various others available that can generate keys with any of these common algorithms. 

In early 2014, OpenSSL disclosed a vulnerability, known as the Heartbleed bug, that exposed sensitive data in the memory of websites and applications. Although unpatched versions of OpenSSL are still available, the Heartbleed bug was patched later that year (2014). Many businesses today use the secure versions of OpenSSL to generate public and private keys, demonstrating the importance of using up-to-date software.

Obscurity is not security

In the world of cryptography, a cipher must be proven to be unbreakable before claiming that it is secure. According to Kerckhoffs's principle, cryptography should be designed in such a way that all the details of an algorithm—except for the private key—should be knowable without sacrificing its security. For example, you can access all the details about how AES encryption works online and yet it is still unbreakable.

Occasionally, organizations implement their own, custom encryption algorithms. There have been instances where those secret cryptographic systems have been quickly cracked after being made public.

Pro tip: A cryptographic system should not be considered secure if it requires secrecy around how it works.

Encryption is everywhere

Companies use both symmetric and asymmetric encryption. They often work as a team, balancing security with user experience.

For example, websites tend to use asymmetric encryption to secure small blocks of data that are important. Usernames and passwords are often secured with asymmetric encryption while processing login requests. Once a user gains access, the rest of their web session often switches to using symmetric encryption for its speed.

Using data encryption like this is increasingly required by law. Regulations like the Federal Information Processing Standards (FIPS 140-3) and the General Data Protection Regulation (GDPR) outline how data should be collected, used, and handled. Achieving compliance with either regulation is critical to demonstrating to business partners and governments that customer data is handled responsibly.

Key takeaways

Knowing the basics of encryption is important for all security professionals. Symmetric encryption relies on a single secret key to protect data. On the other hand, asymmetric uses a public and private key pair. Their encryption algorithms create different key sizes. Both types of encryption are used to meet compliance regulations and protect data online.

Non-repudiation and hashing

We've spent some time together exploring a couple forms of encryption.
The two types we've discussed produce keys that are shared when communicating information.
Encryption keys are vulnerable to being lost or stolen, which can put sensitive information at risk.
Let's explore another security control that helps companies address this weakness.

A hash function is an algorithm that produces a code that can't be decrypted.
Unlike asymmetric and symmetric algorithms, hash functions are one-way processes that do not generate decryption keys.
Instead, these algorithms produce a unique identifier known as a hash value, or digest.
Here's an example to demonstrate this.

Imagine a company has an internal application that is used by employees and is stored in a shared drive.
After passing through a hashing function, the program receives its hash value.
For example purposes, we created this relatively short hash value with the MD5 hashing function.
Generally, standard hash functions that produce longer hashes are preferred for being more secure.

Next, let's imagine an attacker replaces the program with a modified version that performs malicious actions.
The malicious program may work like the original.
However, if so much as one line of code is different from the original, it will produce a different hash value.
By comparing the hash values, we can validate that the programs are different.
Attackers use tricks like this often because they're easily overlooked.
Fortunately, hash values help us identify when something like this is happening.
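A minimal Python sketch of this integrity check, using hypothetical file contents and the standard hashlib module:

```python
import hashlib

# Even a one-character change produces a completely different digest.
original = b"print('run payroll report')"
modified = b"print('run payroll report')  # malicious change"

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(modified).hexdigest()
print(h1 == h2)  # False: the digests differ, so the file was altered
```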

In security, hashes are primarily used as a way to determine the integrity of files and applications.

Data integrity relates to the accuracy and consistency of information.
This is known as non-repudiation, the concept that authenticity of information can't be denied.

Hash functions are important security controls that make proven data integrity possible. Analysts use them frequently.
One way to do this is by finding the hash value of files or applications and comparing them against known malicious files.

For example, we can use the Linux command line to generate the hash value for any file on your computer.
We just launch a shell and type the name of the hashing algorithm we want to use.
In this case, we're using a common one known as sha256.
Next, we need to enter the file name of any file we want to hash.
Let's hash the contents of newfile.txt.
Now, we'll press Enter.
The terminal generates this unique hash value for the file.
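The same check can be scripted. Here's a hypothetical Python equivalent of that terminal workflow; the file name and contents are made up for illustration:

```python
import hashlib

# Write a small sample file, then compute its SHA-256 digest,
# mirroring what running sha256sum on the file does at the command line.
with open("newfile.txt", "w") as f:
    f.write("This is not an important file\n")

with open("newfile.txt", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print(digest)  # a 64-character hexadecimal hash value
```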

These hash values can be compared against databases of known malicious files.
One such database is VirusTotal.
This is a popular tool among security practitioners that's useful for analyzing suspicious files, domains, IPs, and URLs.

As we've explored, even the slightest change in input results in a totally different hash value.
Hash functions are intentionally designed this way to assist with matters of non-repudiation.
They equip computers with a quick and easy way to compare input and output values and validate data integrity.
Pretty cool, right?

The evolution of hash functions

Hash functions are important controls that are part of every company's security strategy. Hashing is widely used for authentication and non-repudiation, the concept that the authenticity of information can’t be denied.

Previously, you learned that hash functions are algorithms that produce a code that can't be decrypted. Hash functions convert information into a unique value that can then be used to determine its integrity. In this reading, you’ll learn about the origins of hash functions and how they’ve changed over time.

The hashing algorithm process. A plaintext document is converted by a hash function into hashed text.

Origins of hashing

Hash functions have been around since the early days of computing. They were originally created as a way to quickly search for data. Since the beginning, these algorithms have been designed to represent data of any size as small, fixed-size values, or digests. Using a hash table, which is a data structure that's used to store and reference hash values, these small values became a more secure and efficient way for computers to reference data.

One of the earliest hash functions is Message Digest 5, more commonly known as MD5. Professor Ronald Rivest of the Massachusetts Institute of Technology (MIT) developed MD5 in the early 1990s as a way to verify that a file sent over a network matched its source file.

Whether it’s used to convert a single email or the source code of an application, MD5 works by converting data into a 128-bit value. You might recall that a bit is the smallest unit of data measurement on a computer. Bits can either be a 0 or 1. In a computer, bits represent user input in a way that computers can interpret. In a hash table, this appears as a string of 32 characters. Altering anything in the source file generates an entirely new hash value.
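The 128-bit/32-character relationship is easy to verify; here's a quick Python sketch (the input string is arbitrary):

```python
import hashlib

# MD5 always yields a 128-bit digest: 32 hex characters, 4 bits each.
digest = hashlib.md5(b"Hello, world").hexdigest()
print(digest)
print(len(digest))      # 32 characters
print(len(digest) * 4)  # 128 bits
```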

Generally, the longer the hash value, the more secure it is. It wasn’t long after MD5's creation that security practitioners discovered 128-bit digests resulted in a major vulnerability.

Here is an example of how plaintext gets turned into hash values:

Names being turned into hash values. The hash values are stored in random rows of a data table.

Hash collisions

One of the flaws in MD5 happens to be a characteristic of all hash functions. Hash algorithms map any input, regardless of its length, into a fixed-size value of letters and numbers. What’s the problem with that? Although there is an infinite number of possible inputs, there’s only a finite set of available outputs!

MD5 values are limited to 32 characters in length. Due to the limited output size, the algorithm is considered to be vulnerable to hash collision, an instance when different inputs produce the same hash value. Because hashes are used for authentication, a hash collision is similar to copying someone’s identity. Attackers can carry out collision attacks to fraudulently impersonate authentic data.

Next-generation hashing

To avoid the risk of hash collisions, functions that generated longer values were needed. MD5's shortcomings gave way to a new group of functions known as the Secure Hashing Algorithms, or SHAs.

The National Institute of Standards and Technology (NIST) approves each of these algorithms. The number beside each SHA function indicates the size of its hash value in bits. Except for SHA-1, which produces a 160-bit digest, these algorithms are considered to be collision-resistant. However, that doesn’t make them invulnerable to other exploits.

Five functions make up the SHA family of algorithms:

  • SHA-1

  • SHA-224

  • SHA-256

  • SHA-384

  • SHA-512

Secure password storage

Passwords are typically stored in a database where they are mapped to a username. The server receives a request for authentication that contains the credentials supplied by the user. It then looks up the username in the database, compares the stored password with the one provided, and verifies that they match before granting access.

This is a safe system unless an attacker gains access to the user database. If passwords are stored in plaintext, then an attacker can steal that information and use it to access company resources. Hashing adds an additional layer of security. Because hash values can't be reversed, an attacker would not be able to steal someone's login credentials if they managed to gain access to the database.
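As a minimal sketch of that idea, the server can store only digests and hash the submitted password at login time. The username and password here are invented, and a bare SHA-256 is used only for illustration; production systems use salted, purpose-built password-hashing algorithms:

```python
import hashlib

# Store only the hash of each password, never the plaintext.
user_db = {
    "analyst01": hashlib.sha256(b"correct horse battery").hexdigest(),
}

def login(username, password):
    # Hash the submitted password and compare it with the stored digest.
    stored = user_db.get(username)
    if stored is None:
        return False
    return hashlib.sha256(password.encode()).hexdigest() == stored

print(login("analyst01", "correct horse battery"))  # True
print(login("analyst01", "guess"))                  # False
```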

Rainbow tables

A rainbow table is a file of pre-generated hash values and their associated plaintext. They’re like dictionaries of weak passwords. Attackers capable of obtaining an organization’s password database can use a rainbow table to compare the stolen hashes against its precomputed values.

Adding some “salt”

Functions with larger digests are less vulnerable to collision and rainbow table attacks. But as you’re learning, no security control is perfect.

Salting is an additional safeguard that's used to strengthen hash functions. A salt is a random string of characters that's added to data before it's hashed. The additional characters produce a more unique hash value, making salted data resilient to rainbow table attacks.

For example, a database containing passwords might have several hashed entries for the password "password." If those passwords were all salted, each entry would be completely different. That means an attacker using a rainbow table would be unable to find matching values for "password" in the database.
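Here's a brief Python sketch of that scenario, using a random per-user salt (the details, like salt length, are illustrative assumptions):

```python
import hashlib
import secrets

def hash_password(password, salt=None):
    # Generate a random salt and prepend it to the password before hashing.
    if salt is None:
        salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    return salt, digest

# Two users choose the same weak password...
salt_a, hash_a = hash_password("password")
salt_b, hash_b = hash_password("password")

# ...but the stored hashes differ, so a rainbow table lookup fails.
print(hash_a == hash_b)  # False: the random salts differ
```

Because the salt is stored alongside the hash, the server can still verify a login by re-hashing the submitted password with that user's salt.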

User input entering a hash function. A random set of characters are added to the hashing process.

For this reason, salting has become increasingly common when storing passwords and other types of sensitive data. The length and uniqueness of a salt is important. Similar to hash values, the longer and more complex a salt is, the harder it is to crack.

Key takeaways

Security professionals often use hashing as a tool to validate the integrity of program files, documents, and other types of data. Another way it’s used is to reduce the chances of a data breach. As you’ve learned, not all hashing functions provide the same level of protection. Rainbow table attacks are more likely to work against algorithms that generate shorter keys, like MD5. Many small- and medium-sized businesses still rely on MD5 to secure sensitive data. Knowing about alternative algorithms and salting better prepares you to make impactful security recommendations.

Access controls and authentication systems

Protecting data is a fundamental feature of security controls.
When it comes to keeping information safe and secure, hashing and encryption are powerful, yet limited tools.
Managing who or what has access to information is also key to safeguarding information.

The next series of controls that we'll be exploring are access controls, the security controls that manage access, authorization, and accountability of information.
When done well, access controls maintain data confidentiality, integrity, and availability.
They also get users the information they need quickly.

These systems are commonly broken down into three separate, yet related functions known as the authentication, authorization, and accounting framework.
Each control has its own protocol and systems that make them work.
In this video, let's get comfortable with the basics of the first one on the list, authentication.

Authentication systems are access controls that serve a very basic purpose.
They ask anything attempting to access information this simple question: who are you?
Organizations go about collecting answers to this question differently, depending on the objectives of their security policy.
Some are more thorough than others, but in general, responses to this question can be based on three factors of authentication.

The first is knowledge. Authentication by knowledge refers to something the user knows, like a password or the answer to a security question they provided previously.

Another factor is ownership, referring to something the user possesses.
A commonly used type of authentication by ownership is a one-time passcode, or OTP.
You've probably experienced these at one time or another.
They're a random number sequence that an application or website will send you via text or email and ask you to provide.

Last is characteristic. Authentication by this factor is something the user is.
Biometrics, like fingerprint scans on your smartphone, are examples of this type of authentication.
While not used everywhere, this form of authentication is becoming more common because it's much tougher for criminals to impersonate someone if they have to mimic a fingerprint or facial scan as opposed to a password.

The information provided during authentication needs to match the information on file for these access controls to work.
When the credentials don't match, authentication fails and access is denied.
When they match, access is granted. 
Incorrectly denying access can be frustrating to anyone.
To make access systems more convenient, many organizations these days rely on single sign-on.
Single sign-on, or SSO, is a technology that combines several different logins into one.
Can you imagine having to reintroduce yourself every time you meet up with a friend?
That's exactly the sort of problem SSO solves.

Instead of requiring users to authenticate over and over again, SSO establishes their identity once, allowing them to gain access to company resources faster.
While SSO systems are helpful when it comes to speeding up the authentication process, they present a significant vulnerability when used alone.

Denying access to authorized users can be frustrating, but you know what's even worse? Incorrectly granting access to the wrong user.
SSO technology is great, but not if it relies on just a single factor of authentication. Adding more authentication factors strengthens these systems.

Multi-factor authentication, or MFA, is a security measure that requires a user to verify their identity in two or more ways to access a system or network.
MFA combines two or more independent credentials, like knowledge and ownership, to prove that someone is who they claim to be.

SSO and MFA are often used in conjunction with one another to layer the defense capabilities of authentication systems.
When both are used, organizations can ensure convenient access that is also secure.
Now that we covered authentication, we're ready to explore the second part of the framework.
Next, we'll learn about authorization!

The rise of SSO and MFA

Most companies help keep their data safely locked up behind authentication systems. Usernames and passwords are the keys that unlock information for most organizations. But are those credentials enough? Information security often focuses on managing a user's access to, and authorization for, information.

Previously, you learned about the three factors of authentication: knowledge, ownership, and characteristic. Single sign-on (SSO) and multi-factor authentication (MFA) are two technologies that have become popular for implementing these authentication factors. In this reading, you’ll learn how these technologies work and why companies are adopting them.

A better approach to authentication

Single sign-on (SSO) is a technology that combines several different logins into one. More companies are turning to SSO as a solution to their authentication needs for three reasons:

  1. SSO improves the user experience by reducing the number of usernames and passwords people have to remember.

  2. Companies can lower costs by streamlining how they manage connected services.

  3. SSO improves overall security by reducing the number of access points attackers can target.

This technology became available in the mid-1990s as a way to combat password fatigue, which refers to people’s tendency to reuse passwords across services. Remembering many different passwords can be a challenge, but using the same password repeatedly is a major security risk. SSO solves this dilemma by shifting the burden of authentication away from the user.

How SSO works

SSO works by automating how trust is established between a user and a service provider. Rather than placing the responsibility on an employee or customer, SSO solutions use trusted third-parties to prove that a user is who they claim to be. This is done through the exchange of encrypted access tokens between the identity provider and the service provider.

Similar to other kinds of digital information, these access tokens are exchanged using specific protocols. SSO implementations commonly rely on two different authentication protocols: LDAP and SAML. LDAP, which stands for Lightweight Directory Access Protocol, is mostly used to transmit information on-premises; SAML, which stands for Security Assertion Markup Language, is mostly used to transmit information off-premises, like in the cloud.

Note: LDAP and SAML protocols are often used together.

Here's an example of how SSO can connect a user to multiple applications with one access token:

One user connects to multiple applications with one access token.

Limitations of SSO

Usernames and passwords alone are not always the most secure way of protecting sensitive information. SSO provides useful benefits, but there’s still the risk associated with using one form of authentication. For example, a lost or stolen password could expose information across multiple services. Thankfully, there’s a solution to this problem.

MFA to the rescue

Multi-factor authentication (MFA) requires a user to verify their identity in two or more ways to access a system or network. In a sense, MFA is similar to using an ATM to withdraw money from your bank account. First, you insert a debit card into the machine as one form of identification. Then, you enter your PIN as a second form of identification. Combined, both steps, or factors, are used to verify your identity before authorizing you to access the account.

An equation showing user login plus biometric or physical devices equal access.

Strengthening authentication

MFA builds on the benefits of SSO. It works by having users prove that they are who they claim to be. The user must provide two factors (2FA) or three factors (3FA) to authenticate their identity. The MFA process asks users to provide these proofs, such as:

  • Something a user knows: most commonly a username and password

  • Something a user has: normally received from a service provider, like a one-time passcode (OTP) sent via SMS

  • Something a user is: refers to physical characteristics of a user, like their fingerprints or facial scans
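A toy Python sketch of combining two of these factors at login (the stored password and the way the passcode is "delivered" are invented for illustration; real systems use standardized OTP schemes such as TOTP):

```python
import secrets

# Factor 1 (knowledge): a credential the user knows.
PASSWORD = "hunter2"  # hypothetical stored credential

# Factor 2 (ownership): a 6-digit one-time passcode "sent" to the user's phone.
otp = f"{secrets.randbelow(1_000_000):06d}"

def authenticate(password, code):
    # Access is granted only when BOTH factors check out.
    return password == PASSWORD and code == otp

print(authenticate("hunter2", otp))   # True: both factors match
print(authenticate("guess", otp))     # False: wrong password, OTP alone fails
```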

Requiring multiple forms of identification is an effective security measure, especially in cloud environments. It can be difficult for businesses in the cloud to ensure that the users remotely accessing their systems are not threat actors. MFA can reduce the risk of authenticating the wrong users by requiring forms of identification that are difficult to imitate or brute force.

Key takeaways

Implementing both SSO and MFA security controls improves security without sacrificing the user experience. Relying on passwords alone is a serious vulnerability. Implementing SSO means fewer points of entry, but that’s not enough. Combining SSO and MFA can be an effective way to protect information, so that users have a streamlined experience while unauthorized people are kept away from important information.

The mechanisms of authorization

Access is as much about authorization as it is about authentication.
One of the most important functions of access controls is how they assign responsibility for certain systems and processes.
Next up in our exploration of access control systems are the mechanisms of authorization.

These protocols actually work closely together with authentication technologies. While one validates who the user is, the other determines what they're allowed to do.
Let's take a look at the next part of the authentication, authorization, and accounting framework that protects private information.

Earlier, we learned about the principle of least privilege.
Authorization is linked to the idea that access to information should only last as long as it's needed.
Authorization systems are heavily influenced by this idea, in addition to another important security principle: the separation of duties.

Separation of duties is the principle that users should not be given levels of authorization that will allow them to misuse a system.
Separating duties reduces the risk of system failures and inappropriate behavior from users.

For example, a person responsible for providing customer service shouldn't also be authorized to rate their own performance. In this position, they could easily neglect their duties while continuing to give themselves high marks with no oversight.
Similarly, if one person was authorized to develop and test a security system, they are much more likely to be unaware of its weaknesses.

Both the principle of least privilege and the concept of separating duties apply to more than just people.
They apply to all systems including networks, databases, processes, and any other aspect of an organization.
Ultimately, authorization depends on a system or user's role.
When it comes to securing data over a network, there are a couple of frequently used access controls that you should be familiar with: HTTP basic auth and OAuth.

Have you ever wondered what the HTTP in web addresses stood for?
It stands for hypertext transfer protocol, which is how communications are established over a network.
HTTP uses what is known as basic auth, the technology used to establish a user's request to access a server.
Basic auth works by sending an identifier every time a user communicates with a web page.

Some websites still use basic auth to tell whether or not someone is authorized to access information on that site.
However, this protocol is considered to be vulnerable to attacks because it transmits usernames and passwords openly over the network.
Most websites today use HTTPS instead, which stands for hypertext transfer protocol secure.
This protocol doesn't expose sensitive information, like access credentials, when communicating over the network.
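To see why plain basic auth is risky, here's a short Python sketch (the credentials are invented): the Authorization header only base64-encodes the username and password, which anyone who captures it can reverse instantly.

```python
import base64

# HTTP basic auth sends credentials in an Authorization header,
# merely base64-encoded -- encoding is NOT encryption.
username, password = "analyst", "S3cret!"
header = "Basic " + base64.b64encode(f"{username}:{password}".encode()).decode()
print(header)

# Anyone observing unencrypted traffic can recover the credentials:
decoded = base64.b64decode(header.split()[1]).decode()
print(decoded)  # analyst:S3cret!
```

This is why basic auth should only ever travel inside an encrypted channel like HTTPS.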

Another secure authentication technology used today is OAuth.
OAuth is an open-standard authorization protocol that shares designated access between applications.
For example, you can tell Google that it's okay for another website to access your profile to create an account.
Instead of requesting and sending sensitive usernames and passwords over the network, OAuth uses API tokens to verify access between you and a service provider.

An API token is a small block of encrypted code that contains information about a user.
These tokens contain things like your identity, site permissions, and more.
OAuth sends and receives access requests using API tokens by passing them from a server to a user's device.

Let's explore what's going on behind the scenes.
When you authorize a site to create an account using your Google profile, all of Google's usual login protocols are still active.
If you have multi-factor authentication enabled on your account, and you should, you'll still have the security benefits that it provides.
API tokens minimize risks in a major way. 
These API tokens serve as an additional layer of encryption that helps to keep your Google password safe in the event of a breach on another platform. 
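To make the token idea concrete, here's a toy Python sketch of a server-signed token. This is only an illustration of how a server can verify a token it issued without ever receiving a password; real OAuth deployments use standardized formats like JWT, and the key and claims here are invented:

```python
import base64
import hashlib
import hmac
import json

SECRET_KEY = b"server-side-secret"  # hypothetical key; never sent to the user

def issue_token(claims):
    # Encode the claims, then sign them with the server's secret key.
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + signature

def verify_token(token):
    # Recompute the signature; any tampering with the payload breaks it.
    payload, signature = token.rsplit(".", 1)
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

token = issue_token({"user": "analyst", "scope": "profile:read"})
print(verify_token(token))         # True: untouched token verifies
print(verify_token("AA" + token))  # False: tampered payload is rejected
```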

Basic auth and OAuth are just a couple of examples of authorization tools that are designed with the principles of least privilege and separation of duty in mind.
There are many other controls that help limit the risk of unauthorized access to information.
In addition to controlling access, it's also important to monitor it.

In our next video, we'll focus on the third and final part of the authentication, authorization, and accounting framework.

Why we audit user activity

Have you ever wondered if your employer is keeping a record of when you log into company systems?
Well, they are, if they're implementing the third and final function of the authentication, authorization, and accounting framework.

Accounting is the practice of monitoring the access logs of a system.
These logs contain information like who accessed the system, when they accessed it, and what resources they used.

Security analysts use access logs a lot.
The data they contain is a helpful way to identify trends, like failed login attempts.
They're also used to uncover hackers who have gained access to a system, and for detecting an incident, like a data breach.
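As a sketch of that kind of trend-spotting, here's a short Python snippet that counts failed login attempts per user. The log lines and their format are made up for illustration:

```python
from collections import Counter

# Hypothetical access log entries.
log_lines = [
    "2024-05-01 09:02:11 user=jdoe action=login result=FAIL",
    "2024-05-01 09:02:25 user=jdoe action=login result=FAIL",
    "2024-05-01 09:02:40 user=jdoe action=login result=FAIL",
    "2024-05-01 09:03:02 user=asmith action=login result=SUCCESS",
]

# Count failed logins per user -- a spike can signal a brute-force attempt.
failures = Counter(
    line.split("user=")[1].split()[0]
    for line in log_lines
    if "result=FAIL" in line
)
print(failures)  # Counter({'jdoe': 3})
```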

In this field, access logs are essential.
Oftentimes, analyzing them is the first procedure you'll follow when investigating a security event.
So, how do access logs compile all this useful information?
Let's examine this more closely.

Anytime a user accesses a system, they initiate what's called a session.
A session is a sequence of network HTTP requests and responses associated with the same user, like when you visit a website.
Access logs are essentially records of sessions that capture the moment a user enters a system until the moment they leave it.

Two actions are triggered when the session begins.
The first is the creation of a session ID.
A session ID is a unique token that identifies a user and their device while accessing the system.
Session IDs are attached to the user until they either close their browser or the session times out.

The second action that takes place at the start of a session is an exchange of session cookies between a server and a user's device.
A session cookie is a token that websites use to validate a session and determine how long that session should last.
When cookies are exchanged between your computer and a server, your session ID is read to determine what information the website should show you. 
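The two actions above can be sketched in a few lines of Python. The usernames and messages here are invented; the point is that the server hands out an unpredictable session ID and later requests present only that ID:

```python
import secrets

# Server-side session store: session ID -> user.
sessions = {}

def start_session(username):
    # Create an unpredictable session ID and remember who it belongs to.
    session_id = secrets.token_hex(32)
    sessions[session_id] = username
    return session_id

def handle_request(session_id):
    # Look up the ID from the session cookie to decide what to show.
    user = sessions.get(session_id)
    return f"Welcome back, {user}" if user else "Please log in"

cookie = start_session("analyst")
print(handle_request(cookie))       # Welcome back, analyst
print(handle_request("forged-id"))  # Please log in
```

Notice that whoever holds a valid session ID is treated as that user, which is exactly what makes stolen cookies dangerous.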

Cookies make web sessions safer and more efficient.
The exchange of tokens means that no sensitive information, like usernames and passwords, are shared.
Session cookies prevent attackers from obtaining sensitive data directly.
However, attackers can still cause damage.
With a stolen cookie, an attacker can impersonate a user using their session token.
This kind of attack is known as session hijacking.

Session hijacking is an event when attackers obtain a legitimate user's session ID.
During these kinds of attacks, cyber criminals impersonate the user, causing all sorts of harm.
Money or private data can be stolen.
If, for example, hijackers obtain a single sign-on credential from stolen cookies, they can even gain access to additional systems that otherwise seem secure.

This is one reason why accounting and monitoring session logs is so important.
Unusual activity on access logs can be an indication that information has been improperly accessed or stolen.
At the end of the day, accounting is how we gain valuable insight that makes information safer.

Tim: Finding purpose in protecting assets

My name is Tim and I work on the Detection and Response team at Google.
You can think of us as the smoke detectors and the fire departments at Google.
So what our job is, is to detect harmful activity that may affect Google and its users.
The stakes here are very, very high.
So imagine what you have on Google, whether it's docs, it's pictures, your financial information, some of your secrets.
Some things that you don't want anybody to know.
Those are the things that we're protecting.
Cybersecurity professionals are there to protect the most valuable assets of the company.
You'll be there to protect that, and that direct line from what you're doing to what the company feels is most important, most valuable, and protecting that, I think provides a lot of purpose for folks.
And provides a lot of motivation and provides the basis and the foundation for a very, very satisfying career.
Cybersecurity is a profoundly rewarding career.
It is a function that is critical at many, many companies and it is a career that is in high demand, and there is an absolute shortage of talented labor out there.
So from that aspect, if you're looking for a path to a viable, long term, rewarding career, this is as straight a path to that as you can imagine.

Identity and access management

Security is more than simply combining processes and technologies to protect assets. Instead, security is about ensuring that these processes and technologies are creating a secure environment that supports a defense strategy. A key to doing this is implementing two fundamental security principles that limit access to organizational resources:

  • The principle of least privilege, in which users have the least amount of access required to perform their everyday tasks

  • Separation of duties, the principle that users should not be given levels of authorization that would allow them to misuse a system

Both principles typically support each other. For example, according to least privilege, a person who needs permission to approve purchases from the IT department shouldn't have the permission to approve purchases from every department. Likewise, according to separation of duties, the person who can approve purchases from the IT department should be different from the person who can input new purchases.

In other words, least privilege limits the access that an individual receives, while separation of duties divides responsibilities among multiple people to prevent any one person from having too much control.

Note: Separation of duties is sometimes referred to as segregation of duties.

Previously, you learned about the authentication, authorization, and accounting (AAA) framework. Many businesses use this model to implement these two security principles and manage user access. In this reading, you’ll learn about the other major framework for managing user access, identity and access management (IAM). You will learn about the similarities between AAA and IAM and how they're commonly implemented.

Identity and access management (IAM)

As organizations become more reliant on technology, regulatory agencies have put more pressure on them to demonstrate that they’re doing everything they can to prevent threats. Identity and access management (IAM) is a collection of processes and technologies that helps organizations manage digital identities in their environment. Both AAA and IAM systems are designed to authenticate users, determine their access privileges, and track their activities within a system.

Whichever model your organization uses, it is more than a single, clearly defined system. Each consists of a collection of security controls that ensure the right user is granted access to the right resources at the right time and for the right reasons. Each of those four factors is determined by your organization's policies and processes.

Note: A user can either be a person, a device, or software.

Authenticating users

To ensure the right user is attempting to access a resource requires some form of proof that the user is who they claim to be. In a video on authentication controls, you learned that there are a few factors that can be used to authenticate a user:

  • Knowledge: something the user knows, like a password

  • Ownership: something the user possesses, like a one-time passcode

  • Characteristic: something the user is, like a fingerprint

Authentication is mainly verified with login credentials. Single sign-on (SSO), a technology that combines several different logins into one, and multi-factor authentication (MFA), a security measure that requires a user to verify their identity in two or more ways to access a system or network, are other tools that organizations use to authenticate individuals and systems.

Pro tip: Another way to remember this authentication model is: something you know, something you have, and something you are.

User provisioning

Back-end systems need to be able to verify whether the information provided by a user is accurate. To accomplish this, users must be properly provisioned. User provisioning is the process of creating and maintaining a user's digital identity. For example, a college might create a new user account when a new instructor is hired. The new account will be configured to provide access to instructor-only resources while they are teaching. Security analysts are routinely involved with provisioning users and their access privileges.

Pro tip: Another role analysts have in IAM is to deprovision users. This is an important practice that removes a user's access rights when they should no longer have them.

Granting authorization

If the right user has been authenticated, the network should ensure the right resources are made available. There are three common frameworks that organizations use to handle this step of IAM:

A system administrator deciding to grant users and an operating system access to data.

Mandatory Access Control (MAC)

MAC is the strictest of the three frameworks. Authorization in this model is based on a strict need-to-know basis. Access to information must be granted manually by a central authority or system administrator. For example, MAC is commonly applied in law enforcement, military, and other government agencies where users must request access through a chain of command. MAC is also known as non-discretionary control because access isn’t given at the discretion of the data owner.

A data owner choosing to grant specific users access to their data.

Discretionary Access Control (DAC)

DAC is typically applied when a data owner decides appropriate levels of access. One example of DAC is when the owner of a Google Drive folder shares editor, viewer, or commenter access with someone else.

A system administrator assigning users to specific roles that have predefined access levels.

Role-Based Access Control (RBAC)

RBAC is used when authorization is determined by a user's role within an organization. For example, a user in the marketing department may have access to user analytics but not network administration.
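The RBAC idea can be sketched in a few lines of Python. The roles, users, and permission names below are invented for illustration; the key point is that permissions attach to roles, and users get them only through role membership:

```python
# Permissions are granted to roles, never directly to users.
role_permissions = {
    "marketing": {"user_analytics:read"},
    "network_admin": {"network:configure", "user_analytics:read"},
}

# Each user is assigned a role within the organization.
user_roles = {"priya": "marketing", "omar": "network_admin"}

def is_authorized(user, permission):
    # A request is allowed only if the user's role carries the permission.
    role = user_roles.get(user)
    return permission in role_permissions.get(role, set())

print(is_authorized("priya", "user_analytics:read"))  # True
print(is_authorized("priya", "network:configure"))    # False
```

Centralizing permissions on roles makes audits and deprovisioning simpler: removing a user's role revokes all of its access at once.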

Access control technologies

Users often experience authentication and authorization as a single, seamless experience. In large part, that’s due to access control technologies that are configured to work together. These tools offer the speed and automation needed by administrators to monitor and modify access rights. They also decrease errors and potential risks.

An organization's IT department sometimes develops and maintains customized access control technologies on their own. A typical IAM or AAA system consists of a user directory, a set of tools for managing data in that directory, an authorization system, and an auditing system. Some organizations create custom systems to tailor them to their security needs. However, building an in-house solution comes at a steep cost of time and other resources.

Instead, many organizations opt to license third-party solutions that offer a suite of tools that enable them to quickly secure their information systems. Keep in mind, security is about more than combining a bunch of tools. It’s always important to configure these technologies so they can help to provide a secure environment.

Key takeaways

Controlling access requires a collection of systems and tools. IAM and AAA are common frameworks for implementing least privilege and separation of duties. As a security analyst, you might be responsible for user provisioning and collaborating with other IAM or AAA teams. Having familiarity with these models is valuable for helping organizations achieve their security objectives. They each ensure that the right user is granted access to the right resources at the right time and for the right reasons.

Resources for more information

The identity and access management industry is growing at a rapid pace. As with other domains in security, it’s important to stay informed.

IDPro® is a professional organization dedicated to sharing essential IAM industry knowledge.

Wrap-up; Terms and definitions from Course 5, Week 2

Our focus in this section was on a major theme of security: protecting assets.
A large part of this relates to privacy.
We should all enjoy the right to decide who can access our information.
As we learned, there are several controls in place that help secure assets.

We began the section by exploring effective data handling processes that are founded on the principle of least privilege.
We then explored the role of encryption and hashing in safeguarding information.
We explored how symmetric and asymmetric encryption works and how hashes further safeguard data from harm.

We then turned our attention to standard access controls. Properly authenticating and authorizing users is what maintaining the CIA triad of information is all about!

We used the AAA framework of security to take a detailed tour of identity and access management systems and the access controls that validate whether or not someone is who they claim to be.

Well done making it through the first half of the course!
You're making great progress so far, and I hope you keep it up.
Remember, your background and experiences are valuable in this field.
This combined with the concepts we're covering will make you a valuable contributor to any security team.

Up until this point, we've been exploring the defensive side of security, but security isn't all about planning ahead and waiting for something to happen.
In the next part of our journey,
we're going to continue developing a security mindset by taking a more proactive look at security from the perspective of attackers. 
I'll meet you there!

Glossary terms from week 2

Access controls: Security controls that manage access, authorization, and accountability of information

Algorithm: A set of rules used to solve a problem

Application programming interface (API) token: A small block of encrypted code that contains information about a user

Asymmetric encryption: The use of a public and private key pair for encryption and decryption of data   

Basic auth: The technology used to establish a user’s request to access a server

Bit: The smallest unit of data measurement on a computer

Brute force attack: The trial and error process of discovering private information

Cipher: An algorithm that encrypts information

Cryptographic key: A mechanism that decrypts ciphertext

Cryptography: The process of transforming information into a form that unintended readers can’t understand

Data custodian: Anyone or anything that’s responsible for the safe handling, transport, and storage of information

Data owner: The person that decides who can access, edit, use, or destroy their information

Digital certificate: A file that verifies the identity of a public key holder

Encryption: The process of converting data from a readable format to an encoded format

Hash collision: An instance when different inputs produce the same hash value

Hash function: An algorithm that produces a code that can’t be decrypted

Hash table: A data structure that's used to store and reference hash values

Identity and access management (IAM): A collection of processes and technologies that helps organizations manage digital identities in their environment 

Information privacy: The protection of unauthorized access and distribution of data

Multi-factor authentication (MFA): A security measure that requires a user to verify their identity in two or more ways to access a system or network

Non-repudiation: The concept that the authenticity of information can’t be denied

OAuth: An open-standard authorization protocol that shares designated access between applications

Payment Card Industry Data Security Standards (PCI DSS): A set of security standards formed by major organizations in the financial industry

Personally identifiable information (PII): Any information used to infer an individual's identity

Principle of least privilege: The concept of granting only the minimal access and authorization required to complete a task or function

Protected health information (PHI): Information that relates to the past, present, or future physical or mental health or condition of an individual

Public key infrastructure (PKI): An encryption framework that secures the exchange of online information

Rainbow table: A file of pre-generated hash values and their associated plaintext

Salting: An additional safeguard that’s used to strengthen hash functions

Security assessment: A check to determine how resilient current security implementations are against threats

Security audit: A review of an organization's security controls, policies, and procedures against a set of expectations

Security controls: Safeguards designed to reduce specific security risks 

Separation of duties: The principle that users should not be given levels of authorization that would allow them to misuse a system

Session: A sequence of network HTTP basic auth requests and responses associated with the same user

Session cookie: A token that websites use to validate a session and determine how long that session should last

Session hijacking: An event when attackers obtain a legitimate user’s session ID

Session ID: A unique token that identifies a user and their device while accessing a system 

Single Sign-On (SSO): A technology that combines several different logins into one

Symmetric encryption: The use of a single secret key to exchange information

User provisioning: The process of creating and maintaining a user's digital identity

Welcome to week 3

Wow! We've covered a lot together!
It's hard to believe we've reached the midpoint of this course.
I hope you're getting a clearer picture of this exciting field and all the opportunities it has to offer.
And most importantly, I hope you're having fun doing it.

We've come a long way from where we started.
When we began our journey together, we were introduced to the three building blocks of every security program: assets, threats, and vulnerabilities.
We focused a lot on assets early on and the wide range of things security professionals work to protect.

We then turned our attention to a core component of asset security, protecting assets.
You learned about the importance of guarding sensitive information.
You also learned about some security controls that protect information from being lost or stolen.

On the next part of our journey,
we're going to turn our focus to vulnerabilities.
Every asset we protect has a series of vulnerabilities, or flaws, that we need to be aware of.
Staying informed of these things is a critical part of protecting people and organizations from harm.

In this next part of the course, you'll gain an understanding of the vulnerability management process.
First, you'll explore a common approach to vulnerability management: the defense in depth model.
Then, you will learn about how vulnerabilities are documented in online libraries like the CVE list.
We'll discuss the attack surfaces security teams protect.
And lastly, you'll expand your attacker mindset by exploring the common attack vectors cybercriminals try to exploit.

Security analysts play an important role in identifying and correcting vulnerabilities in systems.
I know I'm excited to keep exploring,
are you?
Then let's go!

Vulnerability management

For every asset that needs protecting, there are dozens of vulnerabilities. 
Finding those vulnerabilities and fixing them before they become a problem is the key to keeping an asset safe.

We've already covered what a vulnerability is.
Recall that a vulnerability is a weakness that can be exploited by a threat.
That word, can, is an important part of this description.
Why is that?
Let's explore that together to find out more.

Imagine I handed you an important document and asked you to keep it safe.
How would you do that?
Some of you might first think about locking it up in a safe place.
Behind this is the understanding that, because documents can be easily moved,
they are vulnerable to theft.
When other vulnerabilities come to mind, like how paper burns easily or doesn't resist water, you might add other protections.

Similar to this example, security teams plan to protect assets according to their vulnerabilities and how they can be exploited.
In security, an exploit is a way of taking advantage of a vulnerability.
Besides finding vulnerabilities, security planning relies a lot on thinking of exploits.

For example, there are burglars out there who want to cause harm.
Homes have vulnerable systems that can be exploited by a burglar.
One example is the windows: glass is vulnerable to being broken.
A burglar can exploit this vulnerability by using a rock to break the window.
Thinking of this vulnerability and exploit ahead of time allows us to plan ahead.
We can have an alarm system in place to scare the burglar away and alert the police.

Security teams spend a lot of time finding vulnerabilities and thinking of how they can be exploited.
They do this with the process known as vulnerability management.
Vulnerability management is the process of finding and patching vulnerabilities.
Vulnerability management helps keep assets safe.
It's a method of stopping threats before they can become a problem.
Vulnerability management is a four step process.
The first step is to identify vulnerabilities.
The next step is to consider potential exploits of those vulnerabilities.
Third is to prepare defenses against threats. And finally, the fourth step is to evaluate those defenses.
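The four steps above can be sketched as a simple loop. This is only an illustration of the cycle's ordering; the step names come from the text, and the repetition reflects the fact that the process starts over when the last step ends.

```python
# A minimal sketch of the four-step vulnerability management cycle.
# The actions are placeholders, not a real scanner or workflow engine.

STEPS = [
    "identify vulnerabilities",
    "consider potential exploits",
    "prepare defenses",
    "evaluate defenses",
]

def run_cycle(rounds: int = 1) -> list[str]:
    """Walk the four steps in order; the cycle repeats because
    new vulnerabilities are constantly being discovered."""
    log = []
    for _ in range(rounds):
        for step in STEPS:
            log.append(step)
    return log

# Two full rounds produce the four steps twice, in order.
print(run_cycle(rounds=2))
```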

When the last step ends, the process starts again.
Vulnerability management happens in a cycle.
It's a regular part of what security teams do because there are always new vulnerabilities to be concerned about.

This is exactly why a diverse set of perspectives is useful!
Having a wide range of backgrounds and experiences only strengthens security teams and their ability to find exploits.
However, even large and diverse security teams can't keep track of everything.

New vulnerabilities are constantly being discovered.
These are known as zero-day exploits.
A zero-day is an exploit that was previously unknown.
The term zero-day refers to the fact that the exploit is happening in real time
with zero days to fix it.
These kinds of exploits are dangerous.
They represent threats that haven't been planned for yet.

For example, we can anticipate the possibility of a burglar breaking into our home.
We can plan for this type of threat by having defenses in place, like locks on the doors and windows.
A zero-day exploit would be something totally unexpected, like the lock on the door falling off from intense heat.
Zero-day exploits are things that don't normally come to mind.
For example, this might be a new form of spyware infecting a popular website.
When zero-day exploits happen, they can leave assets even more vulnerable to threats than they already are.

Vulnerability management is the process of finding vulnerabilities and fixing their exploits.
That's why the process is performed regularly at most organizations.
Perhaps the most important step of the process is identifying vulnerabilities.
We'll explore this step in more details next time we get together.
I'll meet you again then!

Defense in depth strategy

A layered defense is difficult to penetrate.
When one barrier fails, another takes its place to stop an attack.
Defense in depth is a security model that makes use of this concept.
It's a layered approach to vulnerability management that reduces risk.
Defense in depth is commonly referred to as the castle approach because it resembles the layered defenses of a castle.

In the Middle Ages, these structures were very difficult to penetrate.
They featured different defenses, each unique in its design, that
posed different challenges for attackers.
For example, a water-filled barrier called a moat usually formed a circle around the castle, preventing threats like large groups of attackers from reaching the castle walls.
The few soldiers that made it past the first layer of defense were then faced with
a new challenge, giant stone walls.
A vulnerability of these structures was that they could be climbed.
If attackers tried exploiting that weakness, guess what?
They were met with another layer of defense, watch towers, filled with defenders ready to shoot arrows and keep them from climbing!
Each level of defense of these medieval structures minimized the risk of attacks by identifying vulnerabilities and implementing a security control should one system fail. 

Defense in depth works in a similar way.
The defense in depth concept can be used to protect any asset.
It's mainly used in cybersecurity to protect information using a five-layer design.
Each layer features a number of security controls that protect information as it
travels in and out of the model.

The first layer of defense in depth is the perimeter layer.
This layer includes some technologies that we've already explored,
like usernames and passwords.
Mainly, this is a user authentication layer that filters external access.
Its function is to only allow trusted partners to reach the next layer of defense.

Second, the network layer is more closely aligned with authorization.
The network layer is made up of technologies like network firewalls.

Next, is the endpoint layer.
Endpoints refer to the devices that have access to a network.
They could be devices like a laptop, desktop, or a server.
One example of a technology that protects these devices is anti-virus software.

After that, we get to the application layer.
This includes all the interfaces that are used to interact with technology.
At this layer, security measures are programmed as part of an application.
One common example is multi-factor authentication.
You may be familiar with having to enter both your password and a code sent by SMS.
This is part of the application layer of defense.

And finally, the fifth layer of defense is the data layer.
At this layer, we've arrived at the critical data that must be protected, like personally identifiable information.
One security control that is important here in this final layer of defense is asset classification.
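The five layers described above can be sketched as a chain of checks that a request must pass in sequence. The check functions and field names here are illustrative assumptions, not real controls; the point is that failing any single layer denies access, which is what makes the layered model resilient.

```python
# A sketch of the five-layer defense in depth model. Each layer applies
# one simplified check; a request must pass every layer to reach the data.

def perimeter(req):   return req.get("authenticated", False)      # username/password
def network(req):     return req.get("firewall_allowed", False)   # network firewall
def endpoint(req):    return req.get("device_trusted", False)     # e.g. anti-virus
def application(req): return req.get("mfa_passed", False)         # multi-factor auth
def data(req):        return req.get("classification_ok", False)  # asset classification

LAYERS = [perimeter, network, endpoint, application, data]

def request_allowed(req: dict) -> bool:
    """Defense in depth: if any single layer fails, access is denied."""
    return all(layer(req) for layer in LAYERS)

trusted = {"authenticated": True, "firewall_allowed": True,
           "device_trusted": True, "mfa_passed": True,
           "classification_ok": True}
print(request_allowed(trusted))                            # True
print(request_allowed({**trusted, "mfa_passed": False}))   # False
```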

Like I mentioned earlier, information passes in and out of each of these five layers whenever it's exchanged over a network.
There are many more security controls aside from the few that I mentioned that are part
of the defense in depth model.
A lot of businesses design their security systems using the defense in-depth model.
Understanding this framework hopefully gives you a better sense of how an organization's security controls work together to protect important assets.

Common vulnerabilities and exposures

We've discussed before that security is a team effort.
Did you know the group extends well beyond a single security team?
Protecting information is a global effort!

When it comes to vulnerabilities, there are actually online public libraries.
Individuals and organizations use them to share and document common vulnerabilities and exposures.
We've been focusing a lot on vulnerabilities.
Exposures are similar, but they have a key difference.
While a vulnerability is a weakness of a system, an exposure is a mistake
that can be exploited by a threat.

For example, imagine you're asked to protect an important document.
Documents are vulnerable to being misplaced.
If you laid the document down near an open window, it could be exposed to being blown away.

One of the most popular libraries of vulnerabilities and exposures is the CVE list.
The common vulnerabilities and exposures list, or CVE list, is an openly accessible dictionary
of known vulnerabilities and exposures.
It is a popular resource.

Many organizations use a CVE list to find ways to improve their defenses.
The CVE list was originally created by the MITRE Corporation in 1999.
MITRE is a collection of non-profit research and development centers.
They're sponsored by the US government.
Their focus is on improving security technologies around the world.

The main purpose of the CVE list is to offer a standard way of identifying
and categorizing known vulnerabilities and exposures.
Most CVEs in the list are reported by independent researchers, technology vendors, and ethical hackers, but anyone can report one.
Before a CVE can make it onto the CVE list, it first goes through a strict review process by a CVE Numbering Authority, or CNA.

A CNA is an organization that volunteers to analyze and distribute information on eligible CVEs.
All of these groups have an established record of researching vulnerabilities and demonstrating security advisory capabilities.
When a vulnerability or exposure is reported to them, a rigorous testing process takes place.

The CVE list tests four criteria that a vulnerability must have before it's assigned an ID.
First, it must be independent of other issues. 
In other words, the vulnerability should be able to be fixed without having to fix something else.
Second, it must be recognized as a potential security risk by whoever reports it.
Third, the vulnerability must be submitted with supporting evidence.
And finally, the reported vulnerability can only affect one codebase, or in other words, only one program's source code.
For instance, the desktop version of Chrome may be vulnerable, but the Android application may not be.
If the reported flaw passes all of these tests, it is assigned a CVE ID.
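The four criteria above amount to a checklist that every report must fully satisfy. The sketch below models that as a simple all-or-nothing check; the field names are hypothetical, and the real review is performed by a CNA, not by code.

```python
# A sketch of the four CVE eligibility criteria as a checklist.
# Field names are illustrative assumptions.

CRITERIA = [
    "independent_of_other_issues",    # fixable without fixing something else
    "recognized_as_security_risk",    # reporter identifies it as a risk
    "has_supporting_evidence",        # submitted with evidence
    "affects_single_codebase",        # only one program's source code
]

def eligible_for_cve_id(report: dict) -> bool:
    """A report must satisfy all four criteria before an ID is assigned."""
    return all(report.get(c, False) for c in CRITERIA)

report = {c: True for c in CRITERIA}
print(eligible_for_cve_id(report))  # True
report["affects_single_codebase"] = False  # e.g. the flaw spans two codebases
print(eligible_for_cve_id(report))  # False
```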

Vulnerabilities added to the CVE list are often reviewed by other online vulnerability databases.
These organizations put them through additional tests to reveal how significant the flaws are and to determine what kind of threat they pose. 
One of the most popular is the NIST National Vulnerability Database.

The NIST National Vulnerability Database uses what's known as the common vulnerability scoring system, or CVSS, a measurement system that scores the severity of a vulnerability.
Security teams use CVSS as a way of calculating the impact a vulnerability could have on a system.
They also use CVSS scores to determine how quickly a vulnerability should be patched.

The NIST National Vulnerability Database provides base scores for CVEs on a scale of 0-10.
Base scores reflect the moment a vulnerability is evaluated, so they don't change over time.
In general, a CVSS that scores below a 4.0 is considered to be low risk and doesn't require immediate attention.
However, anything above a 9.0 is considered to be a critical risk to company assets that should be addressed right away.
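A team's triage logic for these thresholds can be sketched as a small function. This uses only the two cut-offs mentioned above (below 4.0 is low risk; 9.0 and above is critical); the label for the in-between range is a simplification, since CVSS actually defines finer-grained severity bands.

```python
# A sketch of triaging CVSS base scores using the thresholds from the
# text. The in-between label is an assumption; CVSS defines more bands.

def triage(base_score: float) -> str:
    if not 0.0 <= base_score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if base_score < 4.0:
        return "low: no immediate attention required"
    if base_score >= 9.0:
        return "critical: address right away"
    return "moderate-to-high: patch according to policy"

print(triage(3.1))  # low: no immediate attention required
print(triage(9.8))  # critical: address right away
```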

Security teams commonly use the CVE list and CVSS scores as part of their vulnerability management strategy.
These references provide recommendations for prioritizing security fixes, like installing software updates and patches.

Libraries like the CVE list help organizations answer questions: Is a vulnerability dangerous to our business?
If so, how soon should we address it? 
These online libraries bring together diverse perspectives from across the world. 
Contributing to this effort is one of my favorite parts of working in this field.
Keep gaining experience, and I hope you'll participate too!

The OWASP Top 10

To prepare for future risks, security professionals need to stay informed. Previously, you learned about the CVE® list, an openly accessible dictionary of known vulnerabilities and exposures. The CVE® list is an important source of information that the global security community uses to share information with each other.

In this reading, you’ll learn about another important resource that security professionals reference, the Open Web Application Security Project, recently renamed Open Worldwide Application Security Project® (OWASP). You’ll learn about OWASP’s role in the global security community and how companies use this resource to focus their efforts.

What is OWASP?

OWASP (Open Web Application Security Project) is a nonprofit foundation that works to improve the security of software. OWASP is an open platform that security professionals from around the world use to share information, tools, and events that are focused on securing the web.

The OWASP Top 10

One of OWASP’s most valuable resources is the OWASP Top 10. The organization has published this list since 2003 as a way to spread awareness of the web’s most targeted vulnerabilities. The Top 10 mainly applies to new or custom made software. Many of the world's largest organizations reference the OWASP Top 10 during application development to help ensure their programs address common security mistakes.

Pro tip: OWASP’s Top 10 is updated every few years as technologies evolve. Rankings are based on how often the vulnerabilities are discovered and the level of risk they present.

Note: Auditors also use the OWASP Top 10 as one point of reference when checking for regulatory compliance.

Common vulnerabilities

Businesses often make critical security decisions based on the vulnerabilities listed in the OWASP Top 10. This resource influences how businesses design new software that will be on their network, unlike the CVE® list, which helps them identify improvements to existing programs. The following are some of the most regularly listed vulnerabilities in their rankings:

Broken access control

Access controls limit what users can do in a web application. For example, a blog might allow visitors to post comments on a recent article but restricts them from deleting the article entirely. Failures in these mechanisms can lead to unauthorized information disclosure, modification, or destruction. They can also give someone unauthorized access to other business applications.

Cryptographic failures

Information is one of the most important assets businesses need to protect. Privacy laws such as General Data Protection Regulation (GDPR) require sensitive data to be protected by effective encryption methods. Vulnerabilities can occur when businesses fail to encrypt things like personally identifiable information (PII). For example, if a web application uses a weak hashing algorithm, like MD5, it’s more at risk of suffering a data breach.
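The difference between a weak and a modern hashing algorithm is easy to demonstrate with Python's standard library. This sketch only computes digests; real password storage should additionally use salting and a deliberately slow, dedicated scheme (e.g. bcrypt or scrypt) rather than a plain hash.

```python
# MD5 is considered weak (collision-prone 128-bit digests), while a
# modern algorithm such as SHA-256 is preferred. Illustration only.

import hashlib

data = b"user@example.com"

weak = hashlib.md5(data).hexdigest()       # 128-bit digest, collision-prone
strong = hashlib.sha256(data).hexdigest()  # 256-bit digest

print(len(weak), len(strong))  # 32 64  (hex characters)
```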

Injection

Injection occurs when malicious code is inserted into a vulnerable application. Although the app appears to work normally, it does things that it wasn’t intended to do. Injection attacks can give threat actors a backdoor into an organization’s information system. A common target is a website’s login form. When these forms are vulnerable to injection, attackers can insert malicious code that gives them access to modify or steal user credentials.
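The standard defense for the login-form example above is to pass user input as query parameters instead of concatenating it into the SQL string. This sketch uses Python's built-in sqlite3 module with a throwaway in-memory table; the table and data are hypothetical.

```python
# Defending a lookup against SQL injection: use parameter placeholders
# so the driver treats the input as data, never as SQL code.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hash123')")

def find_user(username: str):
    # Vulnerable version (do NOT do this):
    #   conn.execute(f"SELECT * FROM users WHERE username = '{username}'")
    # Safe version: the "?" placeholder binds the value as data.
    return conn.execute(
        "SELECT username FROM users WHERE username = ?", (username,)
    ).fetchall()

print(find_user("alice"))        # [('alice',)]
print(find_user("' OR '1'='1"))  # [] -- the injection attempt matches nothing
```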

Insecure design

Applications should be designed in such a way that makes them resilient to attack. When they aren’t, they’re much more vulnerable to threats like injection attacks or malware infections. Insecure design refers to a wide range of missing or poorly implemented security controls that should have been programmed into an application when it was being developed.

Security misconfiguration

Misconfigurations occur when security settings aren’t properly set or maintained. Companies use a variety of different interconnected systems. Mistakes often happen when those systems aren’t properly set up or audited. A common example is when businesses deploy equipment, like a network server, using default settings. This can lead businesses to use settings that fail to address the organization's security objectives.
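Auditing for the default-settings problem described above can be partly automated. This sketch scans a hypothetical device inventory for well-known default credentials; the device list and credential pairs are made-up examples, and a real audit would cover many more settings than passwords.

```python
# Flagging one common misconfiguration: equipment still deployed with
# well-known default credentials. All data here is hypothetical.

KNOWN_DEFAULTS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def find_default_credentials(devices: list[dict]) -> list[str]:
    """Return the names of devices using a known default username/password."""
    return [d["name"] for d in devices
            if (d["username"], d["password"]) in KNOWN_DEFAULTS]

devices = [
    {"name": "web-server-1", "username": "svc_web", "password": "S3cure!pass"},
    {"name": "router-2", "username": "admin", "password": "admin"},
]
print(find_default_credentials(devices))  # ['router-2']
```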

Vulnerable and outdated components

Vulnerable and outdated components is a category that mainly relates to application development. Instead of coding everything from scratch, most developers use open-source libraries to complete their projects faster and easier. This publicly available software is maintained by communities of programmers on a volunteer basis. Applications that use vulnerable components that have not been maintained are at greater risk of being exploited by threat actors.

Identification and authentication failures

Identification is the keyword in this vulnerability category. When applications fail to recognize who should have access and what they’re authorized to do, it can lead to serious problems. For example, a home Wi-Fi router normally uses a simple login form to keep unwanted guests off the network. If this defense fails, an attacker can invade the homeowner’s privacy.

Software and data integrity failures

Software and data integrity failures are instances when updates or patches are inadequately reviewed before implementation. Attackers might exploit these weaknesses to deliver malicious software. When that occurs, there can be serious downstream effects. Third parties are likely to become infected if a single system is compromised, an event known as a supply chain attack.

A famous example of a supply chain attack is the SolarWinds cyber attack (2020), where hackers injected malicious code into software updates that the company unknowingly released to their customers.

Security logging and monitoring failures

In security, it’s important to be able to log and trace back events. Having a record of events like user login attempts is critical to finding and fixing problems. Sufficient monitoring and incident response is equally important.

Server-side request forgery

Companies have public and private information stored on web servers. When you use a hyperlink or click a button on a website, a request is sent to a server that should validate who you are, fetch the appropriate data, and then return it to you.

Server-side request forgeries (SSRFs) are when attackers manipulate the normal operations of a server to read or update other resources on that server. These are possible when an application on the server is vulnerable. Malicious code can be carried by the vulnerable app to the host server that will fetch unauthorized data.
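One common SSRF defense is to validate any user-supplied URL against an allowlist before the server fetches it. The hostnames in this sketch are hypothetical; a production check would also need to guard against redirects and DNS tricks.

```python
# A sketch of an SSRF guard: only fetch URLs whose host is on an
# explicit allowlist, rejecting internal or unknown destinations.

from urllib.parse import urlparse

ALLOWED_HOSTS = {"images.example.com", "cdn.example.com"}

def is_safe_to_fetch(url: str) -> bool:
    """Reject URLs pointing at internal services or unlisted hosts."""
    host = urlparse(url).hostname
    return host in ALLOWED_HOSTS

print(is_safe_to_fetch("https://cdn.example.com/logo.png"))  # True
print(is_safe_to_fetch("http://169.254.169.254/metadata"))   # False
```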

Key takeaways

Staying informed and maintaining awareness about the latest cybersecurity trends can be a useful way to help defend against attacks and prepare for future risks in your security career. OWASP's Top 10 is a useful resource where you can learn more about these vulnerabilities.

Open source intelligence

Cyber attacks can sometimes be prevented with the right information, which starts with knowing where your systems are vulnerable. Previously, you learned that the CVE® list and scanning tools are two useful ways of finding weaknesses. But, there are other ways to identify vulnerabilities and threats.

In this reading, you’ll learn about open-source intelligence, commonly known as OSINT. OSINT is the collection and analysis of information from publicly available sources to generate usable intelligence. It's commonly used to support cybersecurity activities, like identifying potential threats and vulnerabilities. You'll learn why open-source intelligence is gathered and how it can improve cybersecurity. You’ll also learn about commonly used resources and tools for gathering information and intelligence.

Information vs intelligence

The terms intelligence and information are often used interchangeably, making it easy to mix them up. Both are important aspects of cybersecurity that differ in their focus and objectives.

Information refers to the collection of raw data or facts about a specific subject. Intelligence, on the other hand, refers to the analysis of information to produce knowledge or insights that can be used to support decision-making.

For example, new information might be released about an update to the operating system (OS) that's installed on your organization's workstations. Later, you might find that new cyber threats have been linked to this new update by researching multiple cybersecurity news resources. The analysis of this information can be used as intelligence to guide your organization's decision about installing the OS updates on employee workstations.

In other words, intelligence is derived from information through the process of analysis, interpretation, and integration. Gathering information and intelligence are both important aspects of cybersecurity.

Intelligence improves decision-making 

Businesses often use information to gain insights into the behavior of their customers. Insights, or intelligence, can then be used to improve their decision making. In security, open-source information is used in a similar way to gain insights into threats and vulnerabilities that can pose risks to an organization.

OSINT plays a significant role in information security (InfoSec), which is the practice of keeping data in all states away from unauthorized users.

For example, a company's InfoSec team is responsible for protecting their network from potential threats. They might utilize OSINT to monitor online forums and hacker communities for discussions about emerging vulnerabilities. If they come across a forum post discussing a newly discovered weakness in a popular software that the company uses, the team can quickly assess the risk, prioritize patching efforts, and implement necessary safeguards to prevent an attack.

OSINT can be used to generate intelligence in several ways. For example, collecting intelligence is sometimes part of the vulnerability management process. Security teams might use OSINT to develop profiles of potential targets and make data-driven decisions on improving their defenses.

OSINT tools

There's an enormous amount of open-source information online. Finding relevant information that can be used to gather intelligence is a challenge. Information can be gathered from a variety of sources, such as search engines, social media, discussion boards, blogs, and more. Several tools also exist that can be used in your intelligence gathering process.

There are numerous other OSINT tools that can be used to find specific types of information. Remember, information can be gathered from a variety of sources. Ultimately, it's your responsibility to thoroughly research any available information that's relevant to the problem you’re trying to solve.

Key takeaways

Gathering information and intelligence are important aspects of cybersecurity. OSINT is used to make evidence-based decisions that can be used to prevent attacks. There’s so much information available, which is why it's important for security professionals to be skilled with searching for information. Having familiarity with popular OSINT tools and resources will make your research easier when gathering information and collecting intelligence.

Vulnerability assessments

Our exploration of the vulnerability management process so far has been focused on a couple of topics.
We've discussed how vulnerabilities influence the design of defenses.
We've also talked about how common vulnerabilities are shared.
A topic we're yet to cover is how vulnerabilities are found in the first place. 
Weaknesses and flaws are generally found during a vulnerability assessment.
A vulnerability assessment is the internal review process of an organization's security systems.
These assessments work similarly to the process of identifying and categorizing vulnerabilities on the CVE list.
The main difference is that the organization's security team identifies, evaluates, scores, and fixes the vulnerabilities on their own.
Security analysts play a key role throughout this process. 
Overall, the goal of a vulnerability assessment is to identify weak points and prevent attacks.
They're also how security teams determine whether their security controls meet regulatory standards.
Organizations perform vulnerability assessments regularly.
Because companies have so many assets to protect, security teams sometimes need to select which areas to focus on through vulnerability assessments. 
Once they decide what to focus on, vulnerability assessments typically follow a four-step process.
The first step is identification.
Here, scanning tools and manual testing are used to find vulnerabilities.
During the identification step, the goal is to understand the current state of a security system, like taking a picture of it. 
A large number of findings usually appear after identification.
The next step of the process is vulnerability analysis.
During this step, each of the vulnerabilities that were identified is tested.
By being a digital detective, the goal of vulnerability analysis is to find the source of the problem. 
The third step of the process is risk assessment.
During this step of the process, a score is assigned to each vulnerability.
This score is assigned based on two factors: how severe the impact would be if the vulnerability were to be exploited and the likelihood of this happening. 
Vulnerabilities uncovered during the first two steps of this process often outnumber the people available to fix them.
Risk assessments are a way of prioritizing resources to handle the vulnerabilities that need to be addressed based on their score. 
The fourth and final step of vulnerability assessment is remediation.
It's during this step that the vulnerabilities that can impact the organization are addressed.
Remediation efforts are prioritized based on the severity score assigned during the risk assessment step. 
This part of the process is normally a joint effort between the security staff and IT teams to come up with the best approach to fixing the vulnerabilities that were uncovered earlier.
Examples of remediation steps might include things like enforcing new security procedures, updating operating systems, or implementing system patches. 
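The risk assessment step described above can be sketched in a few lines of code. This is a simplified illustration, not a real scoring system like CVSS; the vulnerability names and the severity and likelihood values are made up for the example.

```python
# A minimal sketch of the risk assessment step: each vulnerability is
# scored by severity and likelihood, then sorted so the riskiest issues
# are remediated first. All findings and scores are hypothetical.
findings = [
    {"name": "Outdated OS on web server", "severity": 9, "likelihood": 8},
    {"name": "Weak password policy",      "severity": 6, "likelihood": 7},
    {"name": "Open debug port",           "severity": 8, "likelihood": 4},
]

def risk_score(finding):
    """Combine severity and likelihood into a single priority score."""
    return finding["severity"] * finding["likelihood"]

# Highest-risk vulnerabilities come first in the remediation queue.
remediation_queue = sorted(findings, key=risk_score, reverse=True)

for finding in remediation_queue:
    print(f"{risk_score(finding):>3}  {finding['name']}")
```

In practice, teams use standardized scoring like CVSS rather than a simple product of two numbers, but the idea is the same: scores let limited resources be directed at the most dangerous weaknesses first.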
Vulnerability assessments are great for identifying the flaws of a system.
Most organizations use them to search for problems before they happen.
But how do we know where to search?
When we get together again, we'll explore how companies figure this out.

Approaches to vulnerability scanning

Previously, you learned about a vulnerability assessment, which is the internal review process of an organization's security systems. An organization performs vulnerability assessments to identify weaknesses and prevent attacks. Vulnerability scanning tools are commonly used to simulate threats by finding vulnerabilities in an attack surface. They also help security teams take proactive steps towards implementing their remediation strategy.

Vulnerability scanners are important tools that you'll likely use in the field. In this reading, you’ll explore how vulnerability scanners work and the types of scans they can perform.

What is a vulnerability scanner?

A vulnerability scanner is software that automatically compares known vulnerabilities and exposures against the technologies on the network. In general, these tools scan systems to find misconfigurations or programming flaws.

Scanning tools are used to analyze each of the five attack surfaces that you learned about in the video about the defense in depth strategy:

  1. Perimeter layer, like authentication systems that validate user access

  2. Network layer, which is made up of technologies like network firewalls and others

  3. Endpoint layer, which describes devices on a network, like laptops, desktops, or servers

  4. Application layer, which involves the software that users interact with

  5. Data layer, which includes any information that’s stored, in transit, or in use

When a scan of any layer begins, the scanning tool compares the findings against databases of security threats. At the end of the scan, the tool flags any vulnerabilities that it finds and adds them to its reference database. Each scan adds more information to the database, helping the tool be more accurate in its analysis.

Note: Vulnerability databases are also routinely updated by the company that designed the scanning software.
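At its core, the comparison a scanner performs can be thought of as matching an inventory of technologies on the network against a database of known issues. The sketch below is a heavily simplified illustration; the network inventory is hypothetical, and while the CVE identifiers are real published entries, they're paired here only for the example.

```python
# Simplified sketch of a vulnerability scanner's core comparison:
# match (software, version) pairs found on the network against a
# database of known vulnerable versions. Inventory is hypothetical.
known_vulnerabilities = {
    ("openssh", "7.4"): "CVE-2018-15473",     # username enumeration
    ("apache", "2.4.49"): "CVE-2021-41773",   # path traversal
}

network_inventory = [
    ("openssh", "7.4"),
    ("apache", "2.4.54"),
    ("nginx", "1.22.0"),
]

# Flag anything in the inventory that matches a known vulnerable version.
flagged = [
    (software, version, known_vulnerabilities[(software, version)])
    for software, version in network_inventory
    if (software, version) in known_vulnerabilities
]

print(flagged)
```

Real scanners do far more (version-range matching, configuration checks, authenticated probes), but this matching step is why keeping the vulnerability database current matters so much.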

Performing scans

Vulnerability scanners are meant to be non-intrusive, meaning they don't break or take advantage of a system like an attacker would. Instead, they simply scan a surface and alert you to any potentially unlocked doors in your systems.

Note: While vulnerability scanners are non-intrusive, there are instances when a scan can inadvertently cause issues, like crash a system.

There are a few different ways that these tools are used to scan a surface. Each approach corresponds to the pathway a threat actor might take. Next, you can explore each type of scan to get a clearer picture of this. 

External vs. internal

External and internal scans simulate an attacker's approach.

External scans test the perimeter layer outside of the internal network. They analyze outward facing systems, like websites and firewalls. These kinds of scans can uncover vulnerabilities like open network ports or exposed servers.

Internal scans start from the opposite end by examining an organization's internal systems. For example, this type of scan might analyze application software for weaknesses in how it handles user input.

Authenticated vs. unauthenticated

Authenticated and unauthenticated scans simulate whether or not a user has access to a system.

Authenticated scans might test a system by logging in with a real user account or even with an admin account. These service accounts are used to check for vulnerabilities, like broken access controls.

Unauthenticated scans simulate external threat actors that do not have access to your business resources. For example, a scan might analyze file shares within the organization that are used to house internal-only documents. Unauthenticated users should receive "access denied" results if they try opening these files. However, a vulnerability would be identified if the scan is able to access a file.

Limited vs. comprehensive

Limited and comprehensive scans focus on particular devices that are accessed by internal and external users.

Limited scans analyze particular devices on a network, like searching for misconfigurations on a firewall.

Comprehensive scans analyze all devices connected to a network. This includes operating systems, user databases, and more.

Pro tip: Discovery scanning should be done prior to limited or comprehensive scans. Discovery scanning is used to get an idea of the computers, devices, and open ports that are on a network.
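To make the idea of discovery scanning concrete, here's a minimal sketch that probes a host for open TCP ports using Python's standard socket module. Real discovery tools like Nmap are far more capable; this checks only a short, hypothetical list of ports against the local machine, and scans should only ever be run against hosts you're authorized to test.

```python
# Bare-bones sketch of discovery scanning: probe a host for open TCP
# ports before running a deeper limited or comprehensive scan.
import socket

def discover_open_ports(host, ports, timeout=0.5):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds.
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Check a handful of common service ports on the local machine only.
print(discover_open_ports("127.0.0.1", [22, 80, 443, 3306]))
```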

Key takeaways

Finding vulnerabilities requires thinking of all possibilities. Vulnerability scans vary depending on the surfaces that an organization is evaluating. Usually, seasoned security professionals lead the effort of configuring and performing these types of scans to create a profile of a company’s security posture. However, analysts also play an important role in the process. The results of a vulnerability scan often lead to renewed compliance efforts, procedural changes, and system patching. Understanding the objectives of common types of vulnerability scans will help you participate in these proactive security exercises whenever possible.

Tip: To explore vulnerability scanner software commonly used in the cybersecurity industry, in your preferred browser enter search terms similar to “popular vulnerability scanner software” and/or “open source vulnerability scanner software used in cybersecurity”.

The importance of updates

At some point in time, you may have wondered, “Why do my devices constantly need updating?” For consumers, updates provide improvements to performance, stability, and even new features! But from a security standpoint, they serve a specific purpose. Updates allow organizations to address security vulnerabilities that can place their users, devices, and networks at risk.

In a video, you learned that updates fit into every security team’s remediation strategy. They usually take place after a vulnerability assessment, which is the internal review process of an organization's security systems. In this reading, you’ll learn what updates do, how they’re delivered, and why they’re important to cybersecurity.

Patching gaps in security

An outdated computer is a lot like a house with unlocked doors. Malicious actors exploit these gaps in security to gain unauthorized access, the same way an intruder uses an unlocked door. Software updates are similar to locking the doors to keep them out.

A patch update is a software and operating system update that addresses security vulnerabilities within a program or product. Patches usually contain bug fixes that address common security vulnerabilities and exposures.

Note: Ideally, patches address common vulnerabilities and exposures before malicious hackers find them. However, patches are sometimes developed as a result of a zero-day, which is an exploit that was previously unknown.
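The core decision in patch management, whether an installed version is behind the latest patched release, comes down to a version comparison. The sketch below shows one way to do it; the version strings are hypothetical.

```python
# Sketch of the version-comparison logic behind patch management:
# decide whether an installed release is older than the latest patch.
def parse_version(version):
    """Turn a dotted version string like '2.4.1' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def needs_patch(installed, latest_patched):
    """True when the installed release is behind the patched release."""
    return parse_version(installed) < parse_version(latest_patched)

print(needs_patch("2.4.1", "2.4.3"))   # patch available
print(needs_patch("2.4.10", "2.4.3"))  # already past the patched release
```

Comparing tuples rather than raw strings matters: as strings, "2.4.10" would sort before "2.4.3", but as tuples (2, 4, 10) correctly compares greater than (2, 4, 3).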

Common update strategies

When software updates become available, clients and users have two installation options:

  • Manual updates

  • Automatic updates

As you’ll learn, each strategy has both benefits and disadvantages.

Manual updates

A manual deployment strategy relies on IT departments or users obtaining updates from the developers. Home office or small business environments might require you to find, download, and install updates yourself. In enterprise settings, the process is usually handled with a configuration management tool. These tools offer a range of options to deploy updates, like to all clients on your network or a select group of users.  

Advantage: An advantage of manual update deployment strategies is control. That can be useful if software updates are not thoroughly tested by developers, leading to instability issues.

Disadvantage: A drawback to manual update deployments is that critical updates can be forgotten or disregarded entirely.

Automatic updates

An automatic deployment strategy takes the opposite approach. With this option, finding, downloading, and installing updates can be done by the system or application.

Pro tip: The Cybersecurity and Infrastructure Security Agency (CISA) recommends using automatic options whenever they’re available.

Certain permissions need to be enabled by users or IT groups before updates can be installed, or pushed, when they're available. It is up to the developers to adequately test their patches before release.

Advantage: An advantage to automatic updates is that the deployment process is simplified. It also keeps systems and software current with the latest, critical patches.

Disadvantage: A drawback to automatic updates is that instability issues can occur if the patches were not thoroughly tested by the vendor. This can result in performance problems and a poor user experience.

End-of-life software

Sometimes updates are not available for a certain type of software known as end-of-life (EOL) software. All software has a lifecycle. It begins when it’s produced and ends when a newer version is released. At that point, developers must allocate resources to the newer versions, which leads to EOL software. While the older software is still useful, the manufacturer no longer supports it. 

Note: Patches and updates are very different from upgrades. Upgrades refer to completely new versions of hardware or software that can be purchased.

CISA recommends discontinuing the use of EOL software because it poses an unfixable risk to systems. But, this recommendation is not always followed. Replacing EOL technology can be costly for businesses and individual users.

The risks that EOL software presents continue to grow as more connected devices enter the marketplace. For example, there are billions of Internet of Things (IoT) devices, like smart light bulbs, connected to home and work networks. In some business settings, all an attacker needs is a single unpatched device to gain access to the network and cause problems.

Key takeaways

Updating software and patching vulnerabilities is an important practice that everyone should participate in. Unfortunately, that's not always the case. Many of the biggest cyber attacks in the world might have been prevented if systems had been kept updated. One example is the WannaCry attack of 2017. The attack affected computers in more than 150 countries and caused an estimated $4 billion in damages. Researchers have since found that WannaCry could have been prevented if the infected systems had been up-to-date with a security patch that was made available months before the attack. Keeping software updated requires effort. However, the benefits updates provide make the effort worthwhile.

Omad: My learning journey into cybersecurity

My name is Omad, I'm a corporate operations engineer at Google.
All I do is solve problems.
Googlers have problems, they need somebody to talk to, they usually talk to us.
If you asked me at 18 years old where I'd be now, I would have never told you I'd be working as a security engineer.
I would have told you I'd be working in a prison or I'd be working as a police officer in some township and just working a regular 9 to 5 shift.
After high school, I went on to work at Trenton State Prison, which is the only maximum security prison in New Jersey.
It was very stressful, but at the same time it's what I wanted to do at the time, or at least, that's what I thought I wanted to do at that time.
Five years after becoming a correction officer, I took the test again to be a sheriff's officer.
And on the last day of that academy, I decided this wasn't for me.
I was tired of being on my face doing pushups, I was tired of being yelled at.
I went home and I did what everybody else would do, do a Google search.
And I saw one for Google and it was a residency program, it was at the top of the list and I applied to it as a joke.
I even told my friends at the time, I'm just going to apply this, I'm not going to get in.
I had no reference, no connections, I knew nobody that worked at Google.
And within a couple of days a recruiter reached out to me, she said, "I think you're a great fit, you're a career changer.
I like your application, I like your resume, I think you'd be a great fit."
All the interviewers liked my background, they liked that I was self taught.
A lot of interviewers were able to relate to me.
They said, "Hey, I did the same thing."
From there, I was offered the job and I started my career.
When I was in orientation, somebody right next to me was actually the valedictorian of Princeton.
Here I am with no college degree, no exposure, no work experience, and I'm in the same company.
For career changers, what you have that other people don't have is a different mindset.
You're coming from experience outside of the technical space that you can transfer into the technical space.
Don't forget that we all have skillsets that can help you in the field.
That's what employers are looking for, that's what hiring managers are looking for.
One thing I learned as a correction officer is how to assess risk.
Every situation is different, just like the security space. Every risk is different. Every vulnerability is different. Every threat is different.
You can teach somebody tech, but you can't teach them a life of skills outside of tech.
If I were to go back and tell my 18 year old self one piece of advice, it would be, don't be scared, do it.
A career in cybersecurity is very fun.
It's very interesting.
It will work your brain.
It changed my life, it'll change yours as well.

Penetration testing

An effective security plan relies on regular testing to find an organization's weaknesses. Previously, you learned that vulnerability assessments, the internal review process of an organization's security systems, are used to design defense strategies based on system weaknesses. In this reading, you'll learn how security teams evaluate the effectiveness of their defenses using penetration testing.

Penetration testing

A penetration test, or pen test, is a simulated attack that helps identify vulnerabilities in systems, networks, websites, applications, and processes. The simulated attack in a pen test involves using the same tools and techniques as malicious actors in order to mimic a real-life attack. Since a pen test is an authorized attack, it is considered to be a form of ethical hacking. Unlike a vulnerability assessment, which finds weaknesses in a system's security, a pen test exploits those weaknesses to determine the potential consequences if a threat actor were to break into the system.

For example, the cybersecurity team at a financial company might simulate an attack on their banking app to determine if there are weaknesses that would allow an attacker to steal customer information or illegally transfer funds. If the pen test uncovers misconfigurations, the team can address them and improve the overall security of the app.  

Note: Organizations that are regulated by PCI DSS, HIPAA, or GDPR must routinely perform penetration testing to maintain compliance standards.

Learning from varied perspectives

These authorized attacks are performed by pen testers who are skilled in programming and network architecture. Depending on their objectives, organizations might use a few different approaches to penetration testing:

  • Red team tests simulate attacks to identify vulnerabilities in systems, networks, or applications.

  • Blue team tests focus on defense and incident response to validate an organization's existing security systems.

  • Purple team tests are collaborative, focusing on improving the security posture of the organization by combining elements of red and blue team exercises.

Red team tests are commonly performed by independent pen testers who are hired to evaluate internal systems. Although, cybersecurity teams may also have their own pen testing experts. Regardless of the approach, penetration testers must make an important decision before simulating an attack: How much access and information do I need?

Penetration testing strategies

There are three common penetration testing strategies: 

  • Open-box testing is when the tester has the same privileged access that an internal developer would have—information like system architecture, data flow, and network diagrams. This strategy goes by several different names, including internal, full knowledge, white-box, and clear-box penetration testing.

  • Closed-box testing is when the tester has little to no access to internal systems—similar to a malicious hacker. This strategy is sometimes referred to as external, black-box, or zero knowledge penetration testing.

  • Partial knowledge testing is when the tester has limited access and knowledge of an internal system—for example, a customer service representative. This strategy is also known as gray-box testing.

Closed-box testers tend to produce the most accurate simulations of a real-world attack. Nevertheless, each strategy produces valuable results by demonstrating how an attacker might infiltrate a system and what information they could access.

Becoming a penetration tester

Penetration testers are in-demand in the fast growing field of cybersecurity. All of the skills you’re learning in this program can help you advance towards a career in pen testing:

  • Network and application security

  • Experience with operating systems, like Linux

  • Vulnerability analysis and threat modeling

  • Detection and response tools

  • Programming languages, like Python and BASH

  • Communication skills

Programming skills are very helpful in penetration testing because it's often performed on software and IT systems. With enough practice and dedication, cybersecurity professionals at any level can develop the skills needed to be a pen tester.

Bug bounty programs

Organizations commonly run bug bounty programs, which offer freelance pen testers financial rewards for finding and reporting vulnerabilities in their products. Bug bounties are great opportunities for amateur security professionals to participate and grow their skills. 

Pro tip: HackerOne is a community of ethical hackers where you can find active bug bounties to participate in.

Key takeaways

A major risk for organizations is malicious hackers breaking into their systems. Penetration testing is another way for organizations to secure their systems. Security teams use these simulated attacks to get a clearer picture of weaknesses in their defenses. There’s a growing need for specialized security professionals in this field. Even if you start out assisting with these activities, there’s plenty of opportunities to grow and learn the skills to be a pen tester.

Protect all entry points

There's a wide range of vulnerabilities in systems that need to be found.
Assessing those weaknesses is a time-consuming process.
To position themselves ahead of threats and make the most of their limited resources, companies start by understanding the environment surrounding their operations.
An important part of this is getting a sense of their attack surface. 
An attack surface is all the potential vulnerabilities that a threat actor could exploit.
Analyzing the attack surface is usually the first thing security teams do. 
For example, imagine being part of a security team of an old castle.
Your team would need to decide how to allocate resources to defenses.
Giant walls, stone towers, and wooden gates are a few common security controls of these structures.
While these are all designed to protect the assets inside from attacks, they don't exactly account for all the possibilities.
What if the castle were near the ocean?
If it were, these defenses would be vulnerable to long range attacks by ship.
A proper understanding of the attack surface would mean your security team equipped the castle with catapults that could deal with these kinds of threats. 
Modern organizations need to concern themselves with both a physical and digital attack surface.
The physical attack surface is made up of people and their devices.
This surface can be attacked from both inside and outside the organization, which makes it unique. 
For example, let's consider an unattended laptop in a public space, like a coffee shop.
The person responsible for it walked away while sensitive company information was visible on the screen.
This information is vulnerable to external threats, like a business competitor, who can easily record the information and exploit it.
Internal threats to this attack surface, on the other hand, often come from disgruntled employees.
These employees might share an organization's private information on purpose. 
In general, the physical attack surface should be filled with obstacles that deter attacks from happening.
We call this process security hardening.
Security hardening is the process of strengthening a system to reduce its vulnerabilities and attack surface.
In other words, hardening is the act of minimizing the attack surface by limiting its points of entry. 
We do this a lot in security because the smaller the attack surface, the easier it is to protect.
In fact, some security controls that we've explored previously, like organization policies and access controls, are common ways that organizations harden their physical attack surface. 
The digital attack surface is a bit tougher to harden.
The digital attack surface includes everything that's beyond our organization's firewall.
In other words, it includes anything that connects to an organization online. 
In the past, organizations stored their data in a single location.
This mainly consisted of servers that were managed on-site.
Accessing the information stored on those servers required connecting to the network the workplace managed. 
These days, information is accessed outside of an organization's network because it's stored in the cloud.
Information can be accessed from anywhere in the world.
A person can be in one part of the world, fly to another place, and continue working, all while outside of their organization's network. 
Cloud computing has essentially expanded the digital attack surface.
Quicker access to information is something we all benefit from, but it comes with a cost.
Organizations of all sizes are under more pressure to defend against threats coming from different entry points. 
When we get together next time, we'll explore why this is such a challenge.

Approach cybersecurity with an attacker mindset

Cybersecurity is a continuously changing field. It's a fast-paced environment where new threats and innovative technologies can disrupt your plans at a moment's notice. As a security professional, it’s up to you to be prepared by anticipating change.

This all starts with identifying vulnerabilities. In a video, you learned about the importance of vulnerability assessments, the internal review process of an organization's security systems. In this reading, you will learn how you can use the findings of a vulnerability assessment proactively by analyzing them from the perspective of an attacker.

Being prepared for anything

Having a plan should things go wrong is important. But how do you figure out what to plan for? In this field, teams often conduct simulations of things that can go wrong as part of their vulnerability management strategy. One way this is done is by applying an attacker mindset to the weaknesses they discover.

Applying an attacker mindset is a lot like conducting an experiment. It's about causing problems in a controlled environment and evaluating the outcome to gain insights. Adopting an attacker mindset is a beneficial skill in security because it offers a different perspective about the challenges you're trying to solve. The insights you gain can be valuable when it's time to establish a security plan or modify an existing one.

A group of people securing systems with the help of different technologies.

Simulating threats

One method of applying an attacker mindset is using attack simulations. These activities are normally performed in one of two ways: proactively and reactively. Both approaches share a common goal, which is to make systems safer.

  • Proactive simulations assume the role of an attacker by exploiting vulnerabilities and breaking through defenses. This is sometimes called a red team exercise.

  • Reactive simulations assume the role of a defender responding to an attack. This is sometimes called a blue team exercise.

Each kind of simulation is a team effort that you might be involved with as an analyst.

Proactive teams tend to spend more time planning their attacks than performing them. If you find yourself engaged in one of these exercises, your team will likely deploy a range of tactics. For example, they might use fictitious emails to trick staff into disclosing their login credentials, as a way to evaluate security awareness at the company.

On the other hand, reactive teams dedicate their efforts to gathering information about the assets they're protecting. This is commonly done with the assistance of vulnerability scanning tools. 

Scanning for trouble

You might recall that a vulnerability scanner is software that automatically compares existing common vulnerabilities and exposures against the technologies on the network. Vulnerability scanners are frequently used in the field. Security teams employ a variety of scanning techniques to uncover weaknesses in their defenses. Reactive simulations often rely on the results of a scan to weigh the risks and determine ways to remediate a problem.

For example, a team conducting a reactive simulation might perform an external vulnerability scan of their network. The entire exercise might follow the steps you learned in a video about vulnerability assessments:

  • Identification: A vulnerable server is flagged because it's running an outdated operating system (OS).

  • Vulnerability analysis: Research is done on the outdated OS and its vulnerabilities.

  • Risk assessment: After doing your due diligence, the severity of each vulnerability is scored and the impact of not fixing it is evaluated.

  • Remediation: Finally, the information that you’ve gathered can be used to address the issue.
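The four steps above could be captured in a simple findings record like the one sketched below, feeding the report produced at the end of a reactive simulation. The structure and values are hypothetical; real reports follow whatever template an organization uses.

```python
# Toy sketch of recording one finding through all four assessment
# steps, plus the one-line summary an analyst might put in a report.
finding = {
    "identification": "Server running outdated OS",
    "analysis": "OS version has known remote code execution flaws",
    "risk_assessment": {"severity": 9, "likelihood": 7, "score": 63},
    "remediation": "Upgrade the OS to a supported, patched release",
}

def summarize(finding):
    """Build a one-line report summary from a finding record."""
    score = finding["risk_assessment"]["score"]
    return f"[score {score}] {finding['identification']} -> {finding['remediation']}"

print(summarize(finding))
```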

During an activity like this, you’ll often produce a report of your findings. These can be brought to the attention of service providers or your supervisors. Clearly communicating the results of these exercises to others is an important skill to develop as a security professional.

Finding innovative solutions

Many security controls that you’ve learned about were created as a reactive response to risks. That’s because criminals are continually looking for ways to bypass existing defenses. Effectively applying an attacker mindset will require you to stay knowledgeable of security trends and emerging technologies.

Pro tip: Resources like NIST's National Vulnerability Database (NVD) can help you remain current on common vulnerabilities.

Key takeaways

Vulnerability assessments are an important part of security risk planning. As an analyst, you’ll likely participate in proactive and reactive simulations of these activities. Preparing yourself by researching common vulnerabilities only goes so far. It’s equally important that you stay informed about new technologies to be able to think with an innovative mindset.

Types of threat actors

Anticipating attacks is an important skill you’ll need to be an effective security professional. Developing this skill requires you to have an open and flexible mindset about where attacks can come from. Previously, you learned about attack surfaces, which are all the potential vulnerabilities that a threat actor could exploit.

Networks, servers, devices, and staff are examples of attack surfaces that can be exploited. Security teams of all sizes regularly find themselves defending these surfaces due to the expanding digital landscape. The key to defending any of them is to limit access to them.

In this reading, you’ll learn more about threat actors and the types of risks they pose. You’ll also explore the most common features of an attack surface that threat actors can exploit.

Threat actors

A threat actor is any person or group who presents a security risk. This broad definition refers to people inside and outside an organization. It also includes individuals who intentionally pose a threat, and those who accidentally put assets at risk. That’s a wide range of people!

Threat actors are normally divided into five categories based on their motivations:

  • Competitors refers to rival companies who pose a threat because they might benefit from leaked information.

  • State actors are government intelligence agencies.

  • Criminal syndicates refer to organized groups of people who make money from criminal activity.

  • Insider threats can be any individual who has or had authorized access to an organization’s resources. This includes employees who accidentally compromise assets or individuals who purposefully put them at risk for their own benefit.

  • Shadow IT refers to individuals who use technologies that lack IT governance. A common example is when an employee uses their personal email to send work-related communications.

In the digital attack surface, these threat actors often gain unauthorized access by hacking into systems. By definition, a hacker is any person who uses computers to gain access to computer systems, networks, or data. Similar to the term threat actor, hacker is also an umbrella term. When used alone, the term fails to capture a threat actor’s intentions.

A group of hackers on a world map carrying resources.

Types of hackers

Because the formal definition of a hacker is broad, the term can be a bit ambiguous. In security, it applies to three types of individuals based on their intent:

  1. Unauthorized hackers 

  2. Authorized, or ethical, hackers

  3. Semi-authorized hackers

An unauthorized hacker, or unethical hacker, is an individual who uses their programming skills to commit crimes. Unauthorized hackers are also known as malicious hackers. Skill level ranges widely among this category of hacker. For example, there are hackers with limited skills who can’t write their own malicious software, sometimes called script kiddies. Unauthorized hackers like this carry out attacks using pre-written code that they obtain from other, more skilled hackers.

Authorized, or ethical, hackers refer to individuals who use their programming skills to improve an organization's overall security. These include internal members of a security team who are concerned with testing and evaluating systems to secure the attack surface. They also include external security vendors and freelance hackers that some companies incentivize to find and report vulnerabilities, a practice called bug bounty programs.

Semi-authorized hackers typically refer to individuals who might violate ethical standards, but are not considered malicious. For example, a hacktivist is a person who might use their skills to achieve a political goal. One might exploit the security vulnerabilities of a public utility company to spread awareness of those vulnerabilities. The intentions of these types of threat actors are often to expose security risks that should be addressed before a malicious hacker finds them.

Advanced persistent threats

Many malicious hackers find their way into a system, cause trouble, and then leave. But on some occasions, threat actors stick around. These kinds of events are known as advanced persistent threats, or APTs.

An advanced persistent threat (APT) refers to instances when a threat actor maintains unauthorized access to a system for an extended period of time. The term is mostly associated with nation states and state-sponsored actors. Typically, an APT is concerned with surveilling a target to gather information. They then use the intel to manipulate government, defense, financial, and telecom services.

Just because the term is associated with state actors does not mean that private businesses are safe from APTs. These kinds of threat actors are deliberate and patient because hacking into a government agency or utility is costly and time consuming. APTs will often target private organizations first as a step towards gaining access to larger entities.

Access points

Each threat actor has a unique motivation for targeting an organization's assets. Keeping them out takes more than knowing their intentions and capabilities. It’s also important to recognize the types of attack vectors they’ll use.

For the most part, threat actors gain access through one of these attack vector categories:

  • Direct access, referring to instances when they have physical access to a system

  • Removable media, which includes portable hardware, like USB flash drives

  • Social media platforms that are used for communication and content sharing

  • Email, including both personal and business accounts

  • Wireless networks on premises

  • Cloud services usually provided by third-party organizations

  • Supply chains like third-party vendors that can present a backdoor into systems

Any of these attack vectors can provide access to a system. Recognizing a threat actor’s intentions can help you determine which access points they might target and what ultimate goals they could have. For example, remote workers are more likely to present a threat via email than a direct access threat.

Key takeaways

Defending an attack surface starts with thinking like a threat actor. As a security professional, it’s important to understand why someone would pose a threat to organizational assets. This includes recognizing that every threat actor isn’t intentionally out to cause harm.

It’s equally important to recognize the ways in which a threat actor might gain access to a system. Matching intentions with attack vectors is an invaluable skill as you continue to develop an attacker mindset.

Niru: Adopt an attacker mindset

Hi, I'm Niru, and I lead the red team at Google.
The red team at Google simulates attackers that are trying to hack into Google.
They function as a sparring partner for the blue team, that is, the teams that build security controls, detection pipelines, or respond to incidents.
So we help test all of those by simulating adversaries.
So we hack into Google to make it harder to hack into Google.
So it's like, hey, we found these issues with your system, now here are some recommendations we have, and how can we help you fix this?
Thinking like an attacker is approaching a problem like an adversary.
I generally have a predisposition to think like an attacker. [LAUGH].
It started when I was a kid and I used to play video games, and I used to ask, oh, do I have to beat the game in the way it's intended?
Do I have to get the objective in the standard path?
Looking at a system and asking the question, can I break into it?
How do I break into it?
What is likely to fail?
If it fails, what does that give me?
It's about taking apart systems and trying to understand them.
Threat modeling is integral to almost anything a security professional does.
It's about challenging assumptions.
It's about approaching things from a different perspective.
Rather than looking at the system from the perspective of a developer who is thinking about, how do I build the system in a way that works for people?
You're putting on the hat of an attacker and saying, if I looked at the system, how would I break into it?
It's important for all security professionals to think like an attacker because you code more defensively, you build things more defensively, and you break things more offensively.
And what that means is you're building in this resilience into the system, and you're building in all these safeguards that are going to help protect the data, the systems, and the people.
In order to build my attacker mindset, what I did is I would go pick people's brains.
What that means is I can grab time with them and say, hey, how do you approach the system?
What are the assumptions you're making?
How do you build out the security safeguards that you're thinking about?
My advice for people who are trying to build their own attacker mindset is go talk to people, be it in local meetups, in conferences,
find yourself a CTF group and play these competitions with them.
See how each person in the team approaches certain things and solves for it.
Almost everything we do on a daily basis is online these days, like banking is online, grocery shopping is online, the electricity grid, the water supplies.
All of this has happened in a short span of time, and now people are taking a step back and saying, what does that mean for us?
And cybersecurity folks are the ones who help make sure these systems are locked down and protected against these adversaries.
If you're inquisitive, if you like taking things apart, if you like solving things, if you want to help make things secure, you should join cybersecurity.

Pathways through defenses

To defend against attacks, organizations need more than just an understanding of the growing digital landscape around them.
Positioning themselves ahead of a cyber threat also takes understanding the types of attacks that can be used against them.
Last time, we began exploring how the cloud has expanded the digital attack surface that organizations protect.
As a result, cloud computing has led to an increase in the number of attack vectors available.
Attack vectors refer to the pathways attackers use to penetrate security defenses.
Like the doors and windows of a home, these pathways are the exploitable features of an attack surface.
One example of an attack vector would be social media.
Another would be removable media, like a USB drive. 
Most people outside of security assume that cyber criminals are the only ones out there exploiting attack vectors.
While attack vectors are used by malicious hackers to steal information, other groups use them too.
For example, employees occasionally exploit attack vectors unintentionally.
This happens a lot with social media platforms.
Sometimes, employees post sensitive company news that shouldn't have been shared.
At times, this same kind of thing happens on purpose.
Social media platforms are also vectors that disgruntled employees use to intentionally share confidential information that can harm the company. 
We all treat attack vectors as critical risks to asset security.
Attackers typically put forth a lot of effort planning their attacks before carrying them out.
It's up to us as security professionals to put an even greater amount of effort into stopping them. 
Security teams do this by thinking of each vector with an attacker mindset.
This starts with a simple question, "how would we exploit this vector?"
We then go through a step-by-step process to answer our question. 
First, when practicing an attacker mindset, we identify a target.
This could be specific information, a system, a person, a group, or the organization itself. 
Next, we determine how the target can be accessed.
What information is available that an attacker might take advantage of to reach the target? 
Based on that information, the third step is to evaluate the attack vectors that can be exploited to gain entry. 
And finally, we find the tools and methods of attack.
What will the attackers use to carry this out? 
Along the way, practicing an attacker mindset provides valuable insight into the best security controls to implement and the vulnerabilities that need to be monitored.
Every organization has a long list of attack vectors to defend.
While there are a lot of ways to protect them, there are a few common rules for doing this. 
One key to defending attack vectors is educating users about security vulnerabilities.
These efforts are usually tied to an event.
For example, advising them about a new phishing exploit that is targeting users in the organization. 
Another rule is applying the principle of least privilege.
We've explored least privilege earlier in this section.
It's the idea that access rights should be limited to what's required to perform a task.
Like we previously explored, this practice closes multiple security holes inside an organization's attack surface. 
Next, using the right security controls and tools can go a long way towards defending attack vectors.
Even the most knowledgeable employees make security mistakes, like accidentally clicking on a malicious link in an email.
Having the right security tools in place, like antivirus software, helps to defend attack vectors more efficiently and reduce the risk of human error. 
Last but not least, is building a diverse security team. 
This is one of the best ways to reduce the risk of attack vectors and prevent future attacks. 
Your own unique perspective can greatly improve the security team's ability to apply an attacker's mindset and stay one step ahead of potential threats.
Keeping yourself informed is always important in this field.
You're already off to a great start,  so keep up the good work!

Fortify against brute force cyber attacks

Usernames and passwords are one of the most common and important security controls in use today. They’re like the door lock that organizations use to restrict access to their networks, services, and data. But a major issue with relying on login credentials as a critical line of defense is that they’re vulnerable to being stolen and guessed by attackers.

In a video, you learned that brute force attacks are a trial-and-error process of discovering private information. In this reading, you’ll learn about the many tactics and tools used by threat actors to perform brute force attacks. You’ll also learn prevention strategies that organizations can use to defend against them.

A matter of trial and error

One way of opening a closed lock is trying as many combinations as possible. Threat actors sometimes use similar tactics to gain access to an application or a network.

Attackers use a variety of tactics to find their way into a system:

  • Simple brute force attacks are attempts to guess a user's login credentials by trying every possible combination until one works.

  • Dictionary attacks use a list of commonly used passwords, and sometimes credentials stolen in previous breaches, to access a system.

  • Reverse brute force attacks start with a single credential, such as a commonly used password, and try it across many different accounts or systems in search of a match.

  • Credential stuffing uses login credentials stolen in previous data breaches to access accounts at another organization. A specialized version of this attack, called pass the hash, uses a stolen, hashed version of a password to log in.

Note: Besides access credentials, encrypted information can sometimes be brute forced using a technique known as exhaustive key search.

Each of these methods involves a lot of guesswork. Brute forcing your way into a system can be a tedious and time consuming process, especially when it’s done manually. That’s why threat actors often use tools to conduct their attacks.

Tools of the trade

There are so many combinations that can be used to create a single set of login credentials. The number of characters, letters, and numbers that can be mixed together is truly incredible. When done manually, it could take someone years to try every possible combination.
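To get a sense of the scale, a quick back-of-the-envelope calculation shows why manual guessing is impractical. This is an illustrative sketch, not material from this course: the 94-character set (printable ASCII) and the guess rate are assumptions.

```python
# Rough size of the search space for brute forcing a password.
# The 94-character set and the guess rate below are illustrative assumptions.

def keyspace(charset_size: int, length: int) -> int:
    """Number of possible passwords of exactly this length."""
    return charset_size ** length

GUESSES_PER_SECOND = 1_000_000_000  # hypothetical automated attacker

for length in (6, 8, 10):
    total = keyspace(94, length)
    years = total / GUESSES_PER_SECOND / (60 * 60 * 24 * 365)
    print(f"length {length}: {total:.2e} combinations, ~{years:,.1f} years worst case")
```

Note that each additional character multiplies the work by the size of the character set, which is one reason password length requirements matter so much.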

Instead of dedicating the time to do this, attackers often use software to do the guess work for them. These are some common brute forcing tools:

  • Aircrack-ng

  • Hashcat

  • John the Ripper

  • Ophcrack

  • THC Hydra

Sometimes, security professionals use these tools to test and analyze their own systems. They each serve different purposes. For example, you might use Aircrack-ng to test a Wi-Fi network for vulnerabilities to brute force attack.

Prevention measures

Organizations defend against brute force attacks with a combination of technical and managerial controls. Each makes cracking defense systems through brute force less likely:

Technologies, like multi-factor authentication (MFA), reinforce each login attempt by requiring a second or third form of identification. Other important tools are CAPTCHA and effective password policies.

Hashing and salting

Hashing converts information into a unique value that can then be used to determine its integrity. Salting is an additional safeguard that’s used to strengthen hash functions. It works by adding random characters to data, like passwords. This increases the length and complexity of hash values, making them harder to brute force and less susceptible to dictionary attacks.

Multi-factor authentication (MFA)

Multi-factor authentication (MFA) is a security measure that requires a user to verify their identity in two or more ways to access a system or network. MFA is a layered approach to protecting information. MFA limits the chances of brute force attacks because unauthorized users are unlikely to meet each authentication requirement even if one credential becomes compromised.

CAPTCHA

CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. It is known as a challenge-response authentication system. CAPTCHA asks users to complete a simple test that proves they are human and not software that’s trying to brute force a password.

Here are common CAPTCHA examples:

Image: A side-by-side comparison of a text-based CAPTCHA and an image-based CAPTCHA used to verify that a user is human.

There are two types of CAPTCHA tests. One scrambles and distorts a randomly generated sequence of letters and/or numbers and asks users to enter them into a text box. The other test asks users to match images to a randomly generated word. You’ve likely had to pass a CAPTCHA test when accessing a web service that contains sensitive information, like an online bank account.

Password policy

Organizations use these managerial controls to standardize good password practices across their business. For example, one of these policies might require users to create passwords that are at least 8 characters long and feature a letter, number, and symbol. Other common requirements can include password lockout policies. For example, a password lockout can limit the number of login attempts before access to an account is suspended and require users to create new, unique passwords after a certain amount of time.
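A composition policy like the example above can be enforced in code. This sketch is an illustration of that example policy only, not code from the course:

```python
import re

def meets_policy(password: str) -> bool:
    """Check the example policy: at least 8 characters with a letter, number, and symbol."""
    return (
        len(password) >= 8
        and re.search(r"[A-Za-z]", password) is not None       # at least one letter
        and re.search(r"\d", password) is not None             # at least one number
        and re.search(r"[^A-Za-z0-9]", password) is not None   # at least one symbol
    )
```

In practice, policies should follow NIST SP 800-63B, which emphasizes password length and screening against known-breached passwords over strict composition rules.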

The purpose of each of these requirements is to create more possible password combinations. This lengthens the amount of time it takes an attacker to find one that will work. The National Institute of Standards and Technology (NIST) Special Publication 800-63B provides detailed guidance that organizations can reference when creating their own password policies.

Key takeaways

Brute force attacks are simple yet reliable ways to gain unauthorized access to systems. Generally, the stronger a password is, the more resilient it is to being cracked. As a security professional, you might find yourself using the tools described above to test the security of your organization's systems. Recognizing the tactics and tools used to conduct a brute force attack is the first step towards stopping attackers.

Wrap-up; Terms and definitions from Course 5, Week 3

Here we are at the end of this section! Can you believe it?
I had so much fun exploring the world of vulnerabilities.
I hope you felt the same.
More importantly, I hope you got a better sense of how complex a landscape the digital world is.
This environment is filled with gaps that attackers can use to gain unauthorized access to assets, making it a challenge to defend. 
We've explored a lot of information this time around, so let's quickly recap what we've covered. 
You learned about the vulnerability management process, starting with the defense-in-depth model.
You learned about the layers of this security framework and how each of them work together to build a stronger defense. 
You then learned about the CVE list that's used to find cataloged vulnerabilities.
This is a great addition to your growing security toolbox. 
After that, you learned of the attack surfaces that businesses protect.
We discussed physical and digital surfaces and the challenges of defending the cloud. 
We finished up by exploring common attack vectors, where you learned how security teams use an attacker mindset to identify the security gaps that cyber criminals try to exploit.
 Every one of the vulnerabilities that we've discussed so far is faced with a number of threats. 
When we get back together, we're going to expand our attacker mindset even further by exploring specific types of attacks that cybercriminals commonly use.
We'll look at things like malware and the techniques attackers use to compromise defense systems.
By exploring how these tools and tactics work, you'll gain a clearer understanding of the threats they pose.
We'll then wrap up by investigating how security teams stop these threats from damaging our organizations' operations, their reputation, and most importantly, their customers and employees. 
You've done a fantastic job getting to this point.
When you're ready, let's finish the journey together.
I'm looking forward to being back with you again.

Glossary terms from week 3 

Advanced persistent threat (APT): An instance when a threat actor maintains unauthorized access to a system for an extended period of time 

Attack surface: All the potential vulnerabilities that a threat actor could exploit

Attack tree: A diagram that maps threats to assets

Attack vector: The pathways attackers use to penetrate security defenses 

Bug bounty: Programs that encourage freelance hackers to find and report vulnerabilities

Common Vulnerabilities and Exposures (CVE®) list: An openly accessible dictionary of known vulnerabilities and exposures

Common Vulnerability Scoring System (CVSS): A measurement system that scores the severity of a vulnerability

CVE Numbering Authority (CNA): An organization that volunteers to analyze and distribute information on eligible CVEs

Defense in depth: A layered approach to vulnerability management that reduces risk

Exploit: A way of taking advantage of a vulnerability

Exposure: A mistake that can be exploited by a threat

Hacker: Any person who uses computers to gain access to computer systems, networks, or data

MITRE: A collection of non-profit research and development centers

Security hardening: The process of strengthening a system to reduce its vulnerability and attack surface

Threat actor: Any person or group who presents a security risk

Vulnerability: A weakness that can be exploited by a threat

Vulnerability assessment: The internal review process of a company’s security systems

Vulnerability management: The process of finding and patching vulnerabilities

Vulnerability scanner: Software that automatically compares existing common vulnerabilities and exposures against the technologies on the network

Zero-day: An exploit that was previously unknown

Welcome to week 4; threats and social engineering

Here we are! The final section of the course. What an amazing job you've done so far!
Putting in the time, dedication, and hard work to get to this point is definitely something to celebrate.
But we're not through yet.
As we near the end of this course, now's the time to focus and finish strong.
Let's turn our attention to threats. 
We've already explored assets, vulnerabilities, and the controls used to protect both.
A common theme between those two topics has been the wide range of assets and vulnerabilities out there.
The world of threats is no different. 
If you recall, threats are any circumstance or event that can negatively impact assets.
In this part of the course, you're going to expand your security mindset by getting a high-level view of the most dangerous threats facing organizations today. 
First, we're going to begin by exploring social engineering tactics, psychological tricks that attackers use to gain unauthorized access to assets. 
Next, we'll explore a common type of threat that's been around since the start of personal computers, malware.
We're going to spend some time investigating the major types of malware. 
After that, we'll turn our attention to web-based exploits.
Most organizations these days operate in a digital space, and many of them are new to it.
In this section of the course, you're going to learn about some of the most common threats that organizations face online. 
Finally, after exploring common threats that organizations deal with, we're going to wrap up by exploring the threat modeling process. 
Understanding threats is essential for a security analyst, and there's a lot to cover.
So, let's get started!

The criminal art of persuasion

When you hear the word "cybercriminal", what comes to mind?
You may imagine a hacker hunched over a computer in a dark room.
If this is what came to mind, you're not alone.
In fact, this is what most people outside of security think of.
But online criminals aren't always that different from those operating in the real world. Malicious hackers are just one type of online criminal.
They are a specific kind that relies on sophisticated computer programming skills to pull off their attacks.
There are other ways to commit crimes that don't require programming skills.
Sometimes, criminals rely on a more traditional approach, manipulation. 
Social engineering is a manipulation technique that exploits human error to gain private information, access, or valuables. 
These tactics trick people into breaking normal security procedures on the attacker's behalf.
This can lead to data exposures, widespread malware infections, or unauthorized access to restricted systems.
Social engineering attacks can happen anywhere.
They happen online, in-person, and through other interactions.
Threat actors use many different tactics to carry out their attacks. 
Some attacks can take a matter of seconds to perform.
For example, someone impersonating tech support asks an employee for their password to fix their computer.
Other attacks can take months or longer, such as threat actors monitoring an employee's social media.
The employee might post a comment saying they've gotten a temporary position in a new role at the company.
An attacker might use an opportunity like this to target the temporary worker, who is likely to be less knowledgeable about security procedures. 
Regardless of the time-frame, knowing what to look for can help you quickly identify and stop an attack in its tracks. 
There are multiple stages of social engineering attacks.
The first is usually to prepare.
At this stage, attackers gather information about their target.
Using the intel, they'll determine the best way to exploit them. 
In the next stage, attackers establish trust.
This is often referred to as pretexting.
Here, attackers use the information they gathered earlier to open a line of communication.
They'll typically disguise themselves to trick their target into a false sense of trust. 
After that, attackers use persuasion tactics.
This stage is where the earlier preparation really matters.
This is when the attacker manipulates their target into volunteering information.
Sometimes they do this by using specific vocabulary that makes them sound like a member of the organization. 
The final stage of the process is to disconnect from the target.
After they collect the information they want, attackers break communication with their target.
They disappear to cover their tracks. 
Criminals who use social engineering are stealthy.
The digital world has expanded their capabilities.
It's also created more ways for them to go unnoticed.
Still, there are ways that we can prevent their attacks. 
Implementing managerial controls like policies, standards, and procedures is one of the first lines of defense. 
For example, businesses often follow the patch management standard defined in NIST Special Publication 800-40.
These standards are used to create procedures for updating operating systems, applications, and firmware that can be exploited. 
Staying informed of trends is also a major priority for any security professional.
An even better defense against social engineering attacks is sharing what you know with others.
Attackers play on our natural curiosity and desire to help one another.
Their hope is that targets won't think too hard about what's going on.
Teaching the signs of attack to others goes a long way towards preventing threats. 
Social engineering is a threat to the assets and privacy of both individuals and organizations.
Malicious attackers use a variety of tactics to confuse and manipulate their targets.
When we get back together next time, we're going to explore one of the most commonly used techniques that's a major problem for organizations of all sizes.

Social engineering tactics

Social engineering attacks are a popular choice among threat actors. That’s because it’s often easier to trick people into providing them with access, information, or money than it is to exploit a software or network vulnerability.

As you might recall, social engineering is a manipulation technique that exploits human error to gain private information, access, or valuables. It's an umbrella term that can apply to a broad range of attacks. Each technique is designed to capitalize on the trusting nature of people and their willingness to help. In this reading, you will learn about specific social engineering tactics to watch out for. You’ll also learn ways that organizations counter these threats.

Social engineering risks

Image: A hacker impersonating someone known to their target.

Social engineering is a form of deception that takes advantage of the way people think. It preys on people’s natural feelings of curiosity, generosity, and excitement. Threat actors turn those feelings against their targets by affecting their better judgment. Social engineering attacks can be incredibly harmful because of how easy they can be to accomplish.

One of the highest-profile social engineering attacks that occurred in recent years was the Twitter Hack of 2020. During that incident, a group of hackers made phone calls to Twitter employees pretending to be from the IT department. Using this basic scam, the group managed to gain access to the organization’s network and internal tools. This allowed them to take over the accounts of high-profile users, including politicians, celebrities, and entrepreneurs.

Attacks like this are just one example of the chaos threat actors can create using basic social engineering techniques. These attacks present serious risks because they don’t require sophisticated computer skills to perform. Defending against them requires a multi-layered approach that combines technological controls with user awareness.

Signs of an attack

Oftentimes, people are unable to tell that an attack is happening until it's too late. Social engineering is such a dangerous threat because it typically allows attackers to bypass technological defenses that are in their way. Although these threats are difficult to prevent, recognizing the signs of social engineering is a key to reducing the likelihood of a successful attack.

These are common types of social engineering to watch out for:

  • Baiting is a social engineering tactic that tempts people into compromising their security. A common example is USB baiting, which relies on someone finding an infected USB drive and plugging it into their device.

  • Phishing is the use of digital communications to trick people into revealing sensitive data or deploying malicious software. It is one of the most common forms of social engineering.

  • Quid pro quo is a type of baiting used to trick someone into believing that they'll be rewarded in return for sharing access, information, or money.

  • Tailgating is a social engineering tactic in which unauthorized people follow an authorized person into a restricted area. This is also known as piggybacking.

  • Watering hole is a type of attack in which a website frequently visited by a specific group of users is compromised.

Attackers might use any of these techniques to gain unauthorized access to an organization. Everyone is vulnerable to them, from entry-level employees to senior executives. However, you can reduce the risks of social engineering attacks at any business by teaching others what to expect.

Encouraging caution

Spreading awareness usually starts with comprehensive security training. When it comes to social engineering, there are three main areas to focus on when teaching others:

  • Stay alert of suspicious communications and unknown people, especially when it comes to email. For example, pay attention to spelling errors and double-check the sender's name and email address.

  • Be cautious about sharing information on social media. Threat actors often search these platforms for personal information they can use against a target.

  • Control curiosity when something seems too good to be true. This can include wariness toward unsolicited offers and unfamiliar attachments or links.

Pro tip: Implementing technologies like firewalls, multi-factor authentication (MFA), block lists, email filtering, and others helps layer the defenses should someone make a mistake.

Ideally, security training extends beyond employees. Educating customers about social engineering threats is also a key to mitigating these threats. And security analysts play an important part in promoting safe practices. For example, a big part of an analyst's job is testing systems and documenting best practices for others at an organization to follow.

Key takeaways

People’s willingness to help one another and their trusting nature is what makes social engineering such an appealing tactic for criminals. It just takes one act of kindness or a momentary lapse in judgment for an attack to work. Criminals go to great lengths to make their attacks difficult to detect. They rely on a variety of manipulation techniques to trick their targets into granting them access. For that reason, implementing effective controls and recognizing the signs of an attack go a long way towards preventing threats.

Resources for more information

Here are two additional resources to review that will help you continue developing your understanding of social engineering trends and security practices: 

Phishing for information

Cybercriminals prefer attacks that do the most amount of damage with the least amount of effort.
One of the most popular forms of social engineering that meets this description is phishing. 
Phishing is the use of digital communications to trick people into revealing sensitive data or deploying malicious software.
Phishing leverages many communication technologies, but the term is mainly used to describe attacks that arrive by email. 
Phishing attacks don't just affect individuals.
They are also harmful to organizations.
A single employee that falls for one of these tricks can give malicious attackers access to systems.
Once inside, attackers can exploit sensitive data like customer names and product secrets. 
Attackers who carry out these attacks commonly use phishing kits.
A phishing kit is a collection of software tools needed to launch a phishing campaign.
People with little technical background can use one of these kits. 
Each of the tools inside is designed to avoid detection.
As a security professional, you should be aware of the three main tools inside a phishing kit, so that you can quickly identify when they're being used and put a stop to it. 
The first is malicious attachments.
These are files that are infected and can cause harm to the organization's systems. 
Phishing kits also include fake-data collection forms.
These forms look like legitimate forms, like a survey.
Unlike a real survey, they ask for sensitive information that isn't normally asked for in an email. 
The third resource they include are fraudulent web links.
These open to malicious web pages that are designed to look like trusted brands.
Unlike actual websites, these fraudulent sites are built to steal information, like login credentials. 
Cybercriminals can use these tools to launch a phishing attack in many forms.
The most common is through malicious emails.
However, they can use them in other forms of communication too.
Most recently, cybercriminals are using smishing and vishing to trick people into revealing private information. 
Smishing is the use of text messages to obtain sensitive information or to impersonate a known source.
You've probably received these types of messages before.
Not only are smishing messages annoying to receive, they're also difficult to prevent. That's why some attackers send them.
Some smishing messages are easy to detect.
They might show obvious signs of being malicious, like promising a cash reward in exchange for clicking an attached link.
Other times, smishing is hard to spot.
Attackers sometimes use local area codes to appear legitimate.
Some hackers can even send messages disguised as friends and families of their target to fool them into disclosing sensitive information. 
Vishing is the exploitation of electronic voice communication to obtain sensitive information or impersonate a known source.
During vishing attacks, criminals pretend to be someone they're not.
For example, attackers might call pretending to be a company representative.
They might claim that there's a problem with your account.
And they can offer to fix it if you provide them with sensitive information. 
Most organizations use a few basic security measures to prevent these and any other types of phishing attacks from becoming a problem.
For example, anti-phishing policies spread awareness and encourage users to follow data security procedures correctly.
Employee training resources also help inform employees about things to look for when an email looks suspicious. 
Another line of defense against phishing is securing email inboxes.
Email filters are commonly used to keep harmful messages from reaching users.
For example, specific email addresses can be blocked using a block-list.
Organizations often use other filters, like allow-lists, to specify IP addresses that are approved to send mail within the company. 
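The block-list and allow-list ideas above can be sketched in a few lines of code. This is a minimal illustration, not a real mail filter: the addresses, IP range, and the `is_allowed` helper are all hypothetical examples.

```python
# Simplified sketch of block-list and allow-list email filtering.
# All addresses and networks below are hypothetical examples.
import ipaddress

BLOCK_LIST = {"scammer@bad-domain.example", "prizes@phish.example"}
ALLOW_LIST_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]  # approved sender IPs

def is_allowed(sender_address: str, sender_ip: str) -> bool:
    """Reject block-listed addresses; accept only mail from allow-listed IPs."""
    if sender_address.lower() in BLOCK_LIST:
        return False
    ip = ipaddress.ip_address(sender_ip)
    return any(ip in network for network in ALLOW_LIST_NETWORKS)

print(is_allowed("scammer@bad-domain.example", "203.0.113.10"))  # blocked address
print(is_allowed("partner@vendor.example", "203.0.113.10"))      # approved IP
print(is_allowed("partner@vendor.example", "198.51.100.5"))      # IP not approved
```

Real email gateways combine many more signals (sender authentication, content analysis, reputation scores), but the core allow/block logic follows this shape.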
Organizations also use intrusion prevention systems to look for unusual patterns in email traffic.
Security analysts use monitoring tools like this to spot suspicious emails, quarantine them, and produce a log of events. 
Phishing campaigns are popular and dangerous forms of social engineering that organizations of all sizes need to deal with.
Just a single compromised password that an attacker can get their hands on can lead to a costly data breach.
Now that you're familiar with the tools these attackers use, you're better equipped to spot phishing and prevent it.

Types of phishing

Phishing is one of the most common types of social engineering, which are manipulation techniques that exploit human error to gain private information, access, or valuables. Previously, you learned how phishing is the use of digital communications to trick people into revealing sensitive data or deploying malicious software. 

Sometimes, phishing attacks appear to come from a trusted person or business. This can lead unsuspecting recipients into acting against their better judgment, causing them to break security procedures. In this reading, you’ll learn about common phishing tactics used by attackers today.

An illustration of sensitive resources being extracted from a computer using various phishing lures.

The origins of phishing

Phishing has been around since the early days of the internet. It can be traced back to the 1990s. At the time, people across the world were coming online for the first time. As the internet became more accessible it began to attract the attention of malicious actors. These malicious actors realized that the internet gave them a level of anonymity to commit their crimes.

Early persuasion tactics

One of the earliest instances of phishing was aimed at a popular chat service called AOL Instant Messenger (AIM). Users of the service began receiving emails asking them to verify their accounts or provide personal billing information. The users were unaware that these messages were sent by malicious actors pretending to be service providers.

This was one of the first examples of mass phishing, which describes attacks that send malicious emails out to a large number of people, increasing the likelihood of baiting someone into the trap.

During the AIM attacks, malicious actors carefully crafted emails that appeared to come directly from AOL. The messages used official logos, colors, and fonts to trick unsuspecting users into sharing their information and account details.

Attackers used the stolen information to create fraudulent AOL accounts they could use to carry out other crimes anonymously. AOL was forced to adapt their security policies to address these threats. The chat service began including messages on their platforms to warn users about phishing attacks.

How phishing has evolved

Phishing continued evolving at the turn of the century as businesses and newer technologies began entering the digital landscape. In the early 2000s, e-commerce and online payment systems started to become popular alternatives to traditional marketplaces. The introduction of online transactions presented new opportunities for attackers to commit crimes.

A number of techniques began to appear around this time period, many of which are still used today. There are five common types of phishing that every security analyst should know: email phishing, smishing, vishing, spear phishing, and whaling.

Since the early days of phishing, email attacks remain the most common types that are used. While they were originally used to trick people into sharing access credentials and credit card information, email phishing became a popular method to infect computer systems and networks with malicious software.

In late 2003, attackers around the world created fraudulent websites that resembled businesses like eBay and PayPal™. Mass phishing campaigns to distribute malicious programs were also launched against e-commerce and banking sites.

Social media profiles being extracted from a computer.

Starting in the 2010s, attackers began to shift away from mass phishing attempts that relied on baiting unsuspecting people into a trap. Leveraging new technologies, criminals began carrying out what’s known as targeted phishing attempts. Targeted phishing describes attacks that are sent to specific targets using highly customized methods to create a strong sense of familiarity.

A type of targeted phishing that evolved in the 2010s is angler phishing. Angler phishing is a technique where attackers impersonate customer service representatives on social media. This tactic evolved from people’s tendency to complain about businesses online. Threat actors intercept complaints from places like message boards or comment sections and contact the angry customer via social media. Like the AIM attacks of the 1990s, they use fraudulent accounts that appear similar to those of actual businesses. They then trick the angry customers into sharing sensitive information with the promise of fixing their problem.

Key takeaways

Phishing tactics have become very sophisticated over the years. Unfortunately, there isn't a perfect solution that prevents these attacks from happening. Tactics, like email phishing that started in the last century, remain an effective and profitable method of attack for criminals online today.

There isn’t a technological solution to prevent phishing entirely. However, there are many ways to reduce the damage from these attacks when they happen. One way is to spread awareness and inform others. As a security professional, you may be responsible for helping others identify forms of social engineering, like phishing. For example, you might create training programs that educate employees about topics like phishing. Sharing your knowledge with others is an important responsibility that helps build a culture of security.

Resources for more information

Staying up-to-date on phishing threats is one of the best things you can do to educate yourself and help your organization make smarter security decisions.

Malicious software

People and computers are very different from one another.
But there's one way we're alike: we're both vulnerable to infection.
While humans can be infected by a virus that causes a cold or flu,
computers can be infected by malware. 
Malware is software designed to harm devices or networks.
Malware, which is short for malicious software,
can be spread in many ways.
For example, it can be spread through an infected USB drive.
It's also commonly spread between computers online. 
Devices and systems that are connected to the internet are especially vulnerable to infection.
When a device becomes infected, malware interferes with its normal operations.
Attackers use malware to take control of the infected system without the user's knowledge or permission.
Malware has been a threat to people and organizations for a long time.
Attackers have created many different strains of malware.
They all vary in how they're spread. 
Five of the most common types of malware are a virus, worm, trojan, ransomware, and spyware.
Let's take a look at how each of them work. 
A virus is malicious code written to interfere with computer operations and cause damage to data and software.
Viruses typically hide inside of trusted applications.
When the infected program is launched,
the virus clones itself and spreads to other files on the device.
An important characteristic of viruses is that they have to be activated by the user to start the infection. 
The next kind of malware doesn't have this limitation.
A worm is malware that can duplicate and spread itself across systems on its own.
While viruses require users to perform an action like opening a file to duplicate,
worms use an infected device as a host.
They scan the connected network for other devices.
Worms then infect everything on the network without requiring an action to trigger the spread. 
Viruses and worms are delivered through phishing emails and other methods before they infect a device.
Making sure you click links only from trusted sources is one way to avoid these types of infection.
However, attackers have designed another form of malware that can get past this precaution. 
A trojan, or Trojan horse, is malware that looks like a legitimate file or program.
The name is a reference to an ancient Greek legend that's set in the city of Troy.
In Troy, a group of soldiers hid inside a giant wooden horse that was presented as a gift to their enemies.
It was accepted and brought inside the city walls.
Later that evening, the soldiers inside of the horse climbed out and attacked the city.
Like this ancient tale, attackers design trojans to appear harmless.
This type of malware is typically disguised as files or useful applications to trick their target into installing them. 
Attackers often use trojans to gain access and install another kind of malware called ransomware.
Ransomware is a type of malicious attack where attackers encrypt an organization's data and demand payment to restore access.
These kinds of attacks have become very common.
A unique feature of ransomware attacks is that they make themselves known to their targets.
Without doing this, they couldn't collect the money they demand.
Normally, attackers decrypt the data once the ransom is paid.
Unfortunately, there's no guarantee they won't return to demand more. 
The last type of malware I want to mention is spyware.
Spyware is malware that's used to gather and sell information without consent.
Consent is a keyword in this case. 
Organizations also collect information about their customers, like their browsing habits and purchase history.
However, they always give their customers the ability to opt out.
Cybercriminals, on the other hand,
use spyware to steal information. They use spyware attacks to collect data like login credentials, account PINs, and other types of sensitive information for their own personal gain. 
There are many other types of malware besides these and new forms are always evolving.
They all pose a serious risk to individuals and organizations.
Next time, we'll explore how security teams detect and remove these kinds of threats.

An introduction to malware

Previously, you learned that malware is software designed to harm devices or networks. Since its first appearance on personal computers decades ago, malware has developed into a variety of strains. Being able to identify different types of malware and understand the ways in which they are spread will help you stay alert and be informed as a security professional.

Icons representing different types of malware

Virus

A virus is malicious code written to interfere with computer operations and cause damage to data and software. This type of malware must be installed by the target user before it can spread itself and cause damage. One of the many ways that viruses are spread is through phishing campaigns where malicious links are hidden within links or attachments.

Worm

A worm is malware that can duplicate and spread itself across systems on its own. Similar to a virus, a worm must be installed by the target user and can also be spread with tactics like malicious email. Given a worm's ability to spread on its own, attackers sometimes target devices, drives, or files that have shared access over a network.

A well known example is the Blaster worm, also known as Lovesan, Lovsan, or MSBlast. In the early 2000s, this worm spread itself on computers running Windows XP and Windows 2000 operating systems. It would force devices into a continuous loop of shutting down and restarting. Although it did not damage the infected devices, it was able to spread itself to hundreds of thousands of users around the world. Many variants of the Blaster worm have been deployed since the original and can infect modern computers.

Note: Worms were very popular attacks in the mid-2000s but are less frequently used in recent years.

Trojan

A trojan, also called a Trojan horse, is malware that looks like a legitimate file or program. This characteristic relates to how trojans are spread. Similar to viruses, attackers deliver this type of malware hidden in file and application downloads. Attackers rely on tricking unsuspecting users into believing they’re downloading a harmless file, when they’re actually infecting their own device with malware that can be used to spy on them, grant access to other devices, and more.

Adware

Advertising-supported software, or adware, is a type of legitimate software that is sometimes used to display digital advertisements in applications. Software developers often use adware as a way to lower their production costs or to make their products free to the public—also known as freeware or shareware. In these instances, developers monetize their product through ad revenue rather than at the expense of their users.

Malicious adware falls into a sub-category of malware known as a potentially unwanted application (PUA). A PUA is a type of unwanted software that is bundled in with legitimate programs which might display ads, cause device slowdown, or install other software. Attackers sometimes hide this type of malware in freeware with insecure design to monetize ads for themselves instead of the developer. This works even when the user has declined to receive ads.

Spyware

Spyware is malware that's used to gather and sell information without consent. It's also considered a PUA. Spyware is commonly hidden in bundleware, additional software that is sometimes packaged with other applications. PUAs like spyware have become a serious challenge in the open-source software development ecosystem. That’s because developers tend to overlook how their software could be misused or abused by others.

Scareware

Another type of PUA is scareware. This type of malware employs tactics to frighten users into infecting their own device. Scareware tricks users by displaying fake warnings that appear to come from legitimate companies. Email and pop-ups are just a couple of ways scareware is spread. Both can be used to deliver phony warnings with false claims about the user's files or data being at risk.

Fileless malware

Fileless malware does not need to be installed by the user because it uses legitimate programs that are already installed to infect a computer. This type of infection resides in memory where the malware never touches the hard drive. This is unlike the other types of malware, which are stored within a file on disk. Instead, these stealthy infections get into the operating system or hide within trusted applications.

Pro tip: Fileless malware is detected by performing memory analysis, which requires experience with operating systems. 

Rootkits

A rootkit is malware that provides remote, administrative access to a computer. Most attackers use rootkits to open a backdoor to systems, allowing them to install other forms of malware or to conduct network security attacks.

This kind of malware is often spread by a combination of two components: a dropper and a loader. A dropper is a type of malware that comes packed with malicious code which is delivered and installed onto a target system. For example, a dropper is often disguised as a legitimate file, such as a document, an image, or an executable to deceive its target into opening, or dropping it, onto their device. If the user opens the dropper program, its malicious code is executed and it hides itself on the target system.

Multi-staged malware attacks, where multiple packets of malicious code are deployed, commonly use a variation called a loader. A loader is a type of malware that downloads strains of malicious code from an external source and installs them onto a target system. Attackers might use loaders for different purposes, such as to set up another type of malware: a botnet.

Botnet

A botnet, short for “robot network,” is a collection of computers infected by malware that are under the control of a single threat actor, known as the “bot-herder.” Viruses, worms, and trojans are often used to spread the initial infection and turn the devices into a bot for the bot-herder. The attacker then uses file sharing, email, or social media application protocols to create new bots and grow the botnet. When a target unknowingly opens the malicious file, the computer, or bot, reports the information back to the bot-herder, who can execute commands on the infected computer.

Ransomware

Ransomware describes a malicious attack where threat actors encrypt an organization's data and demand payment to restore access. According to the Cybersecurity and Infrastructure Security Agency (CISA), ransomware crimes are on the rise and becoming increasingly sophisticated. Ransomware infections can cause significant damage to an organization and its customers. An example is the WannaCry attack that encrypts a victim's computer until a cryptocurrency ransom is paid.

Key takeaways

The variety of malware is astounding. The number of ways that it’s spread is even more staggering. Malware is a complex threat that can require its own specialization in cybersecurity. One place to learn more about malware analysis is INFOSEC's introductory course on malware analysis. Even without specializing in malware analysis, recognizing the types of malware and how they’re spread is an important part of defending against these attacks as a security analyst.

The rise of cryptojacking

Malware has been around nearly as long as computers.
In its earliest forms,
it was used by troublemakers
as a form of digital vandalism.
In today's digital world, malware has become a profitable crime that attackers use for their own financial gain. 
As a security professional, it's important that you remain aware of the latest evolutions.
Let's take a closer look at one way malware has evolved.
We'll then use this example to consider how malware can be spotted and how you can proactively protect against malware.
Ransomware is one of the types of malware attackers use to steal money. 
Another more recent type of malware is cryptojacking.
Cryptojacking is a form of malware that installs software to illegally mine cryptocurrencies.
You may be familiar with cryptocurrency from the news.
If you're new to the topic, cryptocurrencies are a form of digital money that have real-world value. Like physical forms of currency, there are many different types.
For the most part, they're referred to as coins or tokens.
In simple terms, crypto mining is a process used to obtain new coins.
Crypto mining is similar to the process for mining for other resources, like gold.
Mining for something like gold involves machinery, such as trucks and bulldozers, that can dig through the Earth.
Crypto coins, on the other hand, use computers instead. Rather than digging through the Earth, the computers run software that digs through billions of lines of encrypted code.
When enough code is processed, a crypto coin can be found.
Generally, more computers mining for coins mean more cryptocurrency can be discovered.
Criminals unfortunately figured this out.
Beginning in 2017, cryptojacking malware started being used to gain unauthorized control of personal computers to mine cryptocurrency. 
Since that time, cryptojacking techniques have become more sophisticated.
Criminals now regularly target vulnerable servers to spread their mining software.
Devices that communicate with the infected server become infected themselves.
The malicious code then runs in the background, mining for coins unknown to anyone. 
Cryptojacking software is hard to detect.
Luckily, security professionals have sophisticated tools that can help.
An intrusion detection system, or IDS, is an application that monitors system activity and alerts on possible intrusions.
When abnormal activity is detected, like malware mining for coins, the IDS alerts security personnel. 
Despite their usefulness, detection systems have a major drawback.
New forms of malware can remain undetected.
Fortunately, there are subtle signs that indicate a device is infected with cryptojacking software
or other forms of malware. 
By far the most telling sign of a cryptojacking infection is slowdown.
Other signs include increased CPU usage, sudden system crashes, and fast-draining batteries.
Another sign is unusually high electricity costs related to the resource-intensive process of crypto mining. 
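The warning signs above can be imagined as simple threshold checks. This is a toy sketch for intuition only: the `check_for_cryptojacking_signs` helper, its readings, and its thresholds are all hypothetical, not real telemetry or detection logic.

```python
# Toy sketch of threshold-based checks for the cryptojacking warning signs
# described above. All thresholds are hypothetical illustrations.
def check_for_cryptojacking_signs(cpu_percent: float, crashes_today: int,
                                  battery_drain_rate: float) -> list[str]:
    """Return a list of warning signs that exceed the example thresholds."""
    alerts = []
    if cpu_percent > 90:
        alerts.append("sustained high CPU usage")
    if crashes_today >= 3:
        alerts.append("repeated system crashes")
    if battery_drain_rate > 20:  # percent of battery per hour
        alerts.append("unusually fast battery drain")
    return alerts

print(check_for_cryptojacking_signs(97.0, 4, 25.0))  # all three signs present
print(check_for_cryptojacking_signs(35.0, 0, 5.0))   # no signs
```

A production IDS correlates many signals over time rather than applying fixed cutoffs, but the idea of flagging abnormal resource usage is the same.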
It's also good to know that there are certain measures you can take to reduce the likelihood of experiencing a malware attack like cryptojacking.
These defenses include things like using browser extensions designed to block malware, using ad blockers, disabling JavaScript, and staying alert to the latest trends.
Security analysts can also educate others in their organizations on malware attacks.
While cryptojacking is still relatively new, attacks are becoming more common.
The type of malicious code cybercriminals spread is continually evolving.
It takes many years of experience to analyze new forms of malware.
Nevertheless, you're well on your way towards helping defend against these threats.

Cross-site scripting (XSS)

Previously, we explored a few types of malware.
Whether it's installed on an individual computer or a network server, all malicious software needs to be delivered to the target before it can work.
Phishing and other social engineering techniques are common ways for malware to be delivered.
Another way it's spread is using a broad class of threats known as web-based exploits.

Web-based exploits are malicious code or behavior that's used to take advantage of coding flaws in a web application.
Cybercriminals target web-based exploits to obtain sensitive personal information.
Attacks occur because web applications interact with multiple users across multiple networks.
Malicious hackers commonly exploit this high level of interaction using injection attacks.

An injection attack is malicious code inserted into a vulnerable application.
The infected application often appears to work normally.
That's because the injected code runs in the background, unknown to the user.
Applications are vulnerable to injection attacks because they are programmed to receive data inputs.
This could be something the user types, clicks, or something one program is sharing with another.
When coded correctly, applications should be able to interpret and handle user inputs.

For example, let's say an application is expecting the user to enter a phone number.
This application should validate the input from the user to make sure the data is all numbers and not more than ten digits.
If the input from the user doesn't meet these requirements, the application should know how to handle it.
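The phone-number check described above can be sketched in a few lines. This is a minimal illustration under the stated requirements (all digits, at most ten of them); the `is_valid_phone` helper is a hypothetical name, and real applications would layer on more validation.

```python
# Minimal sketch of the input validation described above: accept only
# inputs that are all digits and no more than ten digits long.
def is_valid_phone(user_input: str) -> bool:
    return user_input.isdigit() and len(user_input) <= 10

print(is_valid_phone("5551234567"))      # True: exactly ten digits
print(is_valid_phone("555-123-4567"))    # False: contains non-digit characters
print(is_valid_phone("55512345678901"))  # False: more than ten digits
```

When input fails a check like this, the application should reject it with a clear error rather than passing it along to other code.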
Web apps interact with multiple users across many platforms.
They also have a lot of interactive objects like images and buttons.
This makes it challenging for developers to think of all the ways they should sanitize their input.

A common and dangerous type of injection attack that's a threat to web apps is cross-site scripting.
Cross-site scripting, or XSS, is an injection attack that inserts code into a vulnerable website or web application.
These attacks are often delivered by exploiting the two languages used by most websites, HTML and JavaScript.
Both can give cybercriminals access to everything that loads on the infected web page.
This can include session cookies, geolocation, and even webcams and microphones.

There are three main types of cross-site scripting attacks: reflected, stored, and DOM-based.

A reflected XSS attack is an instance where a malicious script is sent to the server and activated during the server's response.
A common example of this is the search bar of a website. 
In a reflected XSS attack, criminals send their target a web link that appears to go to a trusted site.
When they click the link, it sends an HTTP request to the vulnerable site's server.
The attacker script is then returned or reflected back to the innocent user's browser.
Here, the browser loads the malicious script because it trusts the server's response.
With the script loaded, information like session cookies are sent back to the attacker.
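Why does the reflected input cause harm? Because the server echoes it into the HTML it sends back. The sketch below illustrates the idea with a hypothetical `render_results` function and a made-up `stealCookies()` payload; escaping the input before rendering neutralizes the script.

```python
# Sketch of reflected input: the server echoes a search term into its HTML
# response. Escaping the term turns injected markup into harmless text.
from html import escape

def render_results(search_term: str, sanitize: bool) -> str:
    term = escape(search_term) if sanitize else search_term
    return f"<p>Results for: {term}</p>"

payload = "<script>stealCookies()</script>"  # hypothetical attacker script
print(render_results(payload, sanitize=False))  # script tag reaches the browser
print(render_results(payload, sanitize=True))   # rendered as plain text instead
```

Real web frameworks perform this kind of output encoding automatically in their templating engines, which is one reason to prefer them over building HTML by hand.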

In a stored XSS attack, the malicious script isn't hidden in a link that needs to be sent to the server.
Instead, a stored XSS attack is an instance when malicious script is injected directly on the server.
Here, attackers target elements of a site that are served to the user.
This could be things like images and buttons that load when the site is visited.
Infected elements activate the malicious code when a user simply visits the site.
Stored XSS attacks can be damaging because the user has no way of knowing the site is infected beforehand.

Finally there's DOM-based XSS. DOM stands for Document Object Model, which is basically the source code of a website.
A DOM-based XSS attack is an instance when malicious script exists in the web page a browser loads. Unlike reflected XSS,
these attacks don't need to be sent to the server to activate.

In a DOM-based attack, a malicious script can be seen in the URL.
In this example, the website's URL contains parameter values.
The parameter values reflect input from the user.
Here, the site allows users to select color themes.
When the user makes a selection, it appears as part of the URL.
In a DOM-based attack, criminals change the parameter that's expecting an input.
For example, they could hide malicious JavaScript in the HTML tags.
The browser would process the HTML and execute the JavaScript.
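The color-theme example above can be sketched with URL parsing and an allow-list of valid values. This is a hypothetical illustration: the `theme_from_url` helper, the URL, and the allowed values are made up, and real DOM-based attacks execute in the browser rather than in server-side Python.

```python
# Sketch of the DOM-based example above: a page builds behavior from a URL
# parameter. Validating the "theme" value against an allow-list stops
# injected markup from ever being used.
from urllib.parse import urlparse, parse_qs

ALLOWED_THEMES = {"light", "dark"}  # hypothetical allow-list of valid values

def theme_from_url(url: str) -> str:
    """Return the requested theme only if it's an expected value."""
    params = parse_qs(urlparse(url).query)
    theme = params.get("theme", ["light"])[0]
    return theme if theme in ALLOWED_THEMES else "light"

print(theme_from_url("https://site.example/page?theme=dark"))
print(theme_from_url("https://site.example/page?theme=<script>attack()</script>"))
```

Restricting input to a known set of values, rather than trying to filter out "bad" characters, is generally the more robust defense.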

Hackers use these methods of cross-site scripting to steal sensitive information.
Security analysts should be familiar with this group of injection attacks.
However, they're not the only ones, as we'll discover next time. 




Exploitable gaps in databases

Let's keep exploring injection attacks by investigating another common type of web-based exploit.
The next one we're going to discuss exploits the way websites access
information from databases.
Early in the program, you may have learned about SQL.
You may recall, SQL is a programming language used to create,
interact with, and request information from a database.

SQL is used by most web applications. For example, shopping websites use it a lot.
Imagine the database of an online clothing store.
It likely contains a full inventory of all the items the company sells.
Websites don't normally make users enter the SQL queries manually.
Instead, they use things like menus, images, and
buttons to show users information in a meaningful way.
For example, when an online shopper clicks a button to add a sweater to their cart,
it triggers a SQL query. The query runs in the background where no one can see it.

You'd never know from using the menus and buttons of a website, but
sometimes those backend queries are vulnerable to injection attacks.

A SQL injection is an attack that executes unexpected queries on a database.
Like cross-site scripting, SQL injection occurs due to a lack of sanitized input.
The injections take place in areas of the website that are designed to accept user input. A common example is the login form used to access a site.
One of these forms might trigger a backend SQL statement like this when a user enters their credentials.
Web forms, like this one, are designed to copy user input into the statement exactly as it's written.

The statement then sends a request to the server, which runs the query.
Websites that are vulnerable to SQL injection insert the user's input exactly as it's entered before running the code.

Unfortunately, this is a serious design flaw.
It commonly happens because web developers expect people to use these inputs correctly.
They don't anticipate attackers exploiting them. For example, an attacker might insert additional SQL code.
This could cause the server to run a harmful query of code that it wasn't expecting.
Malicious hackers can target these attack vectors to obtain sensitive information, modify tables and even gain administrative rights to the database.
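The login-form flaw described above can be demonstrated in a few lines. This is a deliberately vulnerable sketch using an in-memory SQLite database with hypothetical data; the `vulnerable_login` helper copies input directly into the query, so a crafted username comments out the password check.

```python
# Deliberately vulnerable sketch of the login query described above,
# using an in-memory SQLite database with hypothetical example data.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('admin', 's3cret')")

def vulnerable_login(username: str, password: str) -> bool:
    # User input is copied into the statement exactly as written -- the flaw.
    query = (f"SELECT * FROM users WHERE username = '{username}' "
             f"AND password = '{password}'")
    return db.execute(query).fetchone() is not None

print(vulnerable_login("admin", "wrong-guess"))   # False: wrong password
print(vulnerable_login("admin' --", "anything"))  # True: the injected SQL
# comment ("--") cuts off the password check entirely
```

Entering `admin' --` as the username turns the query into `... WHERE username = 'admin' --' AND password = '...'`, and everything after `--` is ignored as a comment.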

The best way to defend against SQL injection is code that will sanitize the input.
Developers can write code to search for specific SQL characters.
This gives the server a clearer idea of what inputs to expect.
One way this is done is with prepared statements.

A prepared statement is a coding technique that executes SQL statements before passing them on to the database.
When the user's input is unknown, the best practice is to use these prepared statements.
With just a few extra lines of code, a prepared statement executes the code before passing it on to the server.
This means the code can be validated before performing the query.
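Here is the same hypothetical login check rewritten with a parameterized query, which is how prepared statements are typically expressed in Python's sqlite3 module. The `?` placeholders ensure user input is treated as data, never as SQL code.

```python
# Sketch of the same login check using a parameterized query (the prepared-
# statement pattern). Placeholders bind user input as data, not SQL code.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('admin', 's3cret')")

def safe_login(username: str, password: str) -> bool:
    query = "SELECT * FROM users WHERE username = ? AND password = ?"
    return db.execute(query, (username, password)).fetchone() is not None

print(safe_login("admin", "s3cret"))        # True: valid credentials
print(safe_login("admin' --", "anything"))  # False: payload is just literal text
```

With placeholders, the injection payload `admin' --` is compared against the username column as a literal string, so it matches nothing and the attack fails.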

Having well written code is one of the keys to preventing SQL injection.

Security teams work with program developers to test applications for these sorts of vulnerabilities.
Like a lot of security tasks, it's a team effort.
Injection attacks are just one of many types of web-based exploits that security teams deal with.
We're going to explore how security teams prepare for injection attacks and other kinds of threats.

Prevent injection attacks; SQL injection categories

Previously, you learned that Structured Query Language (SQL) is a programming language used to create, interact with, and request information from a database. SQL is one of the most common programming languages used to interact with databases because it is widely supported by a range of database products.

As you might recall, malicious SQL injection is a type of attack that executes unexpected queries on a database. Threat actors perform SQL injections to modify, delete, or steal information from databases. A SQL injection is a common attack vector that is used to gain unauthorized access to web applications. Due to the language's popularity with developers, SQL injections are regularly listed in the OWASP® Top 10 because developers tend to focus on making their applications work correctly rather than protecting their products from injection.

In this reading, you'll learn about SQL queries and how they are used to request information from a database. You will also learn about the three classes of SQL injection attacks used to manipulate vulnerable queries. You will also learn ways to identify when websites are vulnerable and ways to address those gaps.

SQL queries

Nearly every piece of information that's accessed online is stored in a database. A database is an organized collection of information or data in one place. A database can include data such as an organization's employee directory or customer payment methods. In SQL, database information is organized in tables. SQL is commonly used for retrieving, inserting, updating, or deleting information in tables using queries.

A SQL query is a request for data from a database. For example, a SQL query can request data from an organization's employee directory such as employee IDs, names, and job titles. A human resources application can accept an input that queries a SQL table to filter the data and locate a specific person. SQL injections can occur anywhere within a vulnerable application that can accept a SQL query.
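For instance, the employee-directory query described above might look like the following, sketched with Python's built-in sqlite3 module (the table, columns, and data are hypothetical):

```python
import sqlite3

# Build a small in-memory database with a hypothetical employee directory.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE employees (employee_id INTEGER, name TEXT, job_title TEXT)"
)
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [(1, "Jane Doe", "Analyst"), (2, "John Smith", "Engineer")],
)

# Request data from the directory: filter the table to locate one person.
row = conn.execute(
    "SELECT employee_id, name, job_title FROM employees WHERE name = ?",
    ("Jane Doe",),
).fetchone()
print(row)  # (1, 'Jane Doe', 'Analyst')
```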

Queries are usually initiated in places where users can input information into an application or a website via an input field. Input fields include features that accept text input such as login forms, search bars, or comment submission boxes. A SQL injection occurs when an attacker exploits input fields that aren't programmed to filter out unwanted text. SQL injections can be used to manipulate databases, steal sensitive data, or even take control of vulnerable applications.

SQL injection categories

There are three main categories of SQL injection: 

  • In-band SQL injection

  • Out-of-band SQL injection

  • Inferential SQL injection

In the following sections, you'll learn how each type of SQL injection is initiated and how it returns the results of the attack.

In-band SQL injection

In-band, or classic, SQL injection is the most common type. An in-band injection is one that uses the same communication channel to launch the attack and gather the results.

For example, this might occur in the search box of a retailer's website that lets customers find products to buy. If the search box is vulnerable to injection, an attacker could enter a malicious query that would be executed in the database, causing it to return sensitive information like user passwords. The data that's returned is displayed back in the search box where the attack was initiated.

Out-of-band SQL injection

An out-of-band injection is one that uses a different communication channel to launch the attack and gather the results.

For example, an attacker could use a malicious query to create a connection between a vulnerable website and a database they control. This separate channel would allow them to bypass any security controls that are in place on the website's server, allowing them to steal sensitive data.

Note: Out-of-band injection attacks are very uncommon because they'll only work when certain features are enabled on the target server.

Inferential SQL injection

Inferential SQL injection occurs when an attacker is unable to directly see the results of their attack. Instead, they can interpret the results by analyzing the behavior of the system.

For example, an attacker might perform a SQL injection attack on the login form of a website that causes the system to respond with an error message. Although sensitive data is not returned, the attacker can figure out the database's structure based on the error. They can then use this information to craft attacks that will give them access to sensitive data or to take control of the system.

Injection prevention

SQL queries are often programmed with the assumption that users will only input relevant information. For example, a login form that expects users to input their email address assumes the input will be formatted a certain way, such as jdoe@domain.com. Unfortunately, this isn’t always the case.

A key to preventing SQL injection attacks is to escape user inputs—preventing someone from inserting any code that a program isn't expecting.

There are several ways to escape user inputs:

  • Prepared statements: a coding technique that executes SQL statements before passing them on to the database

  • Input sanitization: programming that removes user input which could be interpreted as code

  • Input validation: programming that validates inputs from users and other programs

Using a combination of these techniques can help prevent SQL injection attacks. In the security field, you might need to work closely with application developers to address vulnerabilities that can lead to SQL injections. OWASP's SQL injection detection techniques is a useful resource if you're interested in investigating SQL injection vulnerabilities on your own.
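As a sketch of the validation technique, input that doesn't match the expected format can be rejected before any query runs. This Python example assumes a login form that expects an email address; the pattern and function name are illustrative, not from the course:

```python
import re

# A simple pattern for well-formed email addresses. Real applications
# often rely on a vetted library instead of a hand-rolled regex.
EMAIL_PATTERN = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_valid_email(user_input: str) -> bool:
    """Return True only when the input looks like a well-formed email."""
    return bool(EMAIL_PATTERN.match(user_input))

print(is_valid_email("jdoe@domain.com"))   # True
print(is_valid_email("jdoe' OR '1'='1"))   # False -- rejected before any query runs
```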

Key takeaways

Many web applications retrieve data from databases using SQL, and injection attacks are quite common due to the popularity of the language. As is the case with other kinds of injection attacks, SQL injections are a result of unexpected user input. It's important to collaborate with app developers to help prevent these kinds of attacks by sharing your understanding of SQL injection techniques and the defenses that should be put in place.

A proactive approach to security

Preparing for attacks is an important job that the entire security team is responsible for.
Threat actors have many tools they can use depending on their target.
For example, attacking a small business can be different from attacking a public utility.
Each has different assets and specific defenses to keep them safe.
In all cases, anticipating attacks is the key to preparing for them.
In security, we do that by performing an activity known as threat modeling. 
Threat modeling is a process of identifying assets, their vulnerabilities, and how each is exposed to threats.
We apply threat modeling to everything we protect. Entire systems, applications, or business processes all get examined from this security-related perspective. 
Creating threat models is a lengthy and detailed activity.
They're normally performed by a collection of individuals with years of experience in the field.
Because of that, it's considered to be an advanced skill in security.
However, that doesn't mean you won't be involved. 
There are several threat modeling frameworks used in the field.
Some are better suited for network security. Others are better for things like information security, or application development. 
In general, there are six steps of a threat model.
The first is to define the scope of the model.
At this stage, the team determines what they're building by creating an inventory of assets and classifying them. 
The second step is to identify threats.
Here, the team defines all potential threat actors.
A threat actor is any person or group who presents a security risk.
Threat actors are characterized as being internal or external.
For example, an internal threat actor could be an employee who intentionally exposes an asset to harm.
An example of an external threat actor could be a malicious hacker, or a competing business. 
After threat actors have been identified, the team puts together what's known as an attack tree.
An attack tree is a diagram that maps threats to assets.
The team tries to be as detailed as possible when constructing this diagram before moving on. 
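An attack tree like this can be represented as a simple nested structure. In this Python sketch, the node names are hypothetical, and the leaves are the concrete attack vectors a team would go on to test:

```python
# A minimal attack tree: each node is a goal, and its "paths" are the
# sub-goals or attack vectors that could achieve it.
attack_tree = {
    "goal": "Steal customer data",
    "paths": [
        {
            "goal": "Compromise the database",
            "paths": [
                {"goal": "SQL injection via search box", "paths": []},
                {"goal": "Stolen admin credentials", "paths": []},
            ],
        },
        {"goal": "Phish an employee", "paths": []},
    ],
}

def leaf_attacks(node):
    """Collect the leaf nodes, i.e., the concrete attack vectors to test."""
    if not node["paths"]:
        return [node["goal"]]
    return [leaf for child in node["paths"] for leaf in leaf_attacks(child)]

print(leaf_attacks(attack_tree))
# ['SQL injection via search box', 'Stolen admin credentials', 'Phish an employee']
```

Walking the tree from root to leaves mirrors how a team traces a threat down to the specific vectors that map it to an asset.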
Step three of the threat modeling process is to characterize the environment.
Here, the team applies an attacker mindset to the business.
They consider how the customers and employees interact with the environment.
Other factors they consider are external partners and third party vendors. 
At step four, their objective is to analyze threats.
Here, the team works together to examine existing protections and identify gaps.
They then rank threats according to the risk score they assign. 
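A lightweight way to picture this ranking step is a likelihood-times-impact score. The scale, threat names, and scores below are hypothetical; real teams use whatever scoring model their framework prescribes:

```python
# Hypothetical threats scored on a 1-5 scale for likelihood and impact.
threats = [
    {"name": "SQL injection on login form", "likelihood": 4, "impact": 5},
    {"name": "Tailgating into server room", "likelihood": 2, "impact": 4},
    {"name": "Phishing email to staff", "likelihood": 5, "impact": 3},
]

# Assign a simple risk score to each threat.
for t in threats:
    t["risk_score"] = t["likelihood"] * t["impact"]

# Rank threats with the highest risk first.
ranked = sorted(threats, key=lambda t: t["risk_score"], reverse=True)
for t in ranked:
    print(f'{t["risk_score"]:>2}  {t["name"]}')
```

The ranked list gives the team an ordered backlog: the highest-scoring threats are the gaps to close first.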
During step five, the team decides how to mitigate risk.
At this point, the group creates their plan for defending against threats.
The choices here are to avoid risk, transfer it, reduce it, or accept it. 
The sixth and final step is to evaluate findings.
At this stage, everything that was done during the exercise is documented, fixes are applied, and the team makes note of any successes they had.
They also record any lessons learned, so
they can inform how they approach future threat models. 
That's an overview of the general threat modeling process.
What we've explored was just one of many methods that exist.

Chantelle: The value of diversity in cybersecurity

My name is Chantelle.
I'm a Security Engineer here at Google, and I am part of
the security implementation and scaling team.
We secure and monitor systems that contain sensitive information.
My background, initially I was going to be a heart surgeon and then I took chemistry, I took chem 1, and I was like,
no that's not happening.
My interest in cybersecurity came from a TV show called Mr.
Robot. It's about a vigilante hacker trying to save the world.
And from there, that kind of piqued my interest in security, and so that's a great foundation.
Valuing diversity in security is important because we're exposed to a broad range of thinking.
That helps to inspire a lot of creative ideas and different perspectives and different ways of tackling a problem and that kind of leads us forward into being better security engineers.
Our Manager, Laureen always steps in to tell us, "Don't be so quick to find a solution.
Don't be so quick to solve the problems yourselves."
We have a wide range of security engineers and connections to our disposal, and she encourages us to go out and seek them out,
and then to come back, settle in, and brainstorm all of these ideas that we've collected after we've gone out and tried to find them.
We've ultimately almost always come up with the best possible outcome that we can ever come up with.
My advice for people to get into the industry is get out there and be proactive.
I definitely recommend joining up the security community on Twitter.
There's a huge security community on Twitter right now.
That community shares a bunch of resources,
opportunities, and job positions, and its members are definitely open to talking to anyone that's interested in getting into the field but just doesn't know how. I recommend security as a career.
Definitely, I think that for me personally, I was able to tap into my rebel side a lot in security.
I found I was able to express myself a bit more in security.
It's just a whole ball of goodness.

PASTA: The Process for Attack Simulation and Threat Analysis

Let's finish exploring threat modeling by taking a look at real-world scenarios.
This time, we'll use a standard threat modeling process called PASTA. 
Imagine that a fitness company is getting ready to launch their first mobile app.
Before we can go live, the company asks their security team to ensure the app will protect customer data.
The team decides to perform a threat model using the PASTA framework. 
Stage one of the PASTA threat model framework is to define business and security objectives.
Before starting the threat model, the team needs to decide what their goals are.
The main objective in our example with the fitness company app is protecting customer data.
The team starts by asking a lot of questions at this stage.
They'll need to understand things like how personally identifiable information is handled.
Answering these questions is a key to evaluate the impact of threats that they'll find along the way. 
Stage two of the PASTA framework is to define the technical scope.
Here, the team's focus is to identify the application components that must be evaluated.
This is what we discussed earlier as the attack surface.
For a mobile app, this will include technology that's involved while data is at rest and in use.
This includes network protocols, security controls, and other data interactions. 
At stage three of PASTA, the team's job is to decompose the application.
In other words, we need to identify the existing controls that will protect user data from threats.
This normally means working with the application developers to produce a data flow diagram.
A diagram like this will show how data gets from a user's device to the company's database.
It would also identify the controls in place to protect this data along the way. 
Stage four of PASTA is next.
The focus here is to perform a threat analysis.
This is where the team gets into their attacker mindset.
Here, research is done to collect the most up-to-date information on the type of attacks being used.
Like other technologies, mobile apps have many attack vectors.
These change regularly, so the team would reference resources to stay up-to-date. 
Stage five of PASTA is performing a vulnerability analysis.
In this stage, the team more deeply investigates
potential vulnerabilities by considering the root of the problem. 
Next is stage six of PASTA, where the team conducts attack modeling.
This is where the team tests the vulnerabilities that were analyzed in stage five by simulating attacks.
The team does this by creating an attack tree, which looks like a flow chart.
For example, an attack tree for our mobile app might look like this.
Customer information, like user names and passwords, is a target.
This data is normally stored in a database.
We've learned that databases are vulnerable to attacks like SQL injection.
So we will add this attack vector to our attack tree.
A threat actor might exploit vulnerabilities caused by unsanitized inputs to attack this vector.
The security team uses attack trees like this to identify attack vectors that need to be tested to validate threats.
This is just one branch of this attack tree.
An application, like a fitness app, typically has lots of branches with
a number of other attack vectors. 
Stage seven of PASTA is to analyze risk and impact.
Here, the team assembles all the information they've collected in stages one through six.
By this stage, the team is in position to make informed risk management recommendations to business stakeholders that align with their goals. 
And with that, we made it all the way through a threat modeling exercise based on the PASTA framework!

Traits of an effective threat model

Threat modeling is the process of identifying assets, their vulnerabilities, and how each is exposed to threats. It is a strategic approach that combines various security activities, such as vulnerability management, threat analysis, and incident response. Security teams commonly perform these exercises to ensure their systems are adequately protected. Another use of threat modeling is to proactively find ways of reducing risks to any system or business process.

Traditionally, threat modeling is associated with the field of application development. In this reading, you will learn about common threat modeling frameworks that are used to design software that can withstand attacks. You'll also learn about the growing need for application security and ways that you can participate.

Why application security matters

Applications have become an essential part of many organizations' success. For example, web-based applications allow customers from anywhere in the world to connect with businesses, their partners, and other customers.

Mobile applications have also changed the way people access the digital world. Smartphones are often the main way that data is exchanged between users and a business. The volume of data being processed by applications makes securing them a key to reducing risk for everyone who’s connected. 

For example, say an application uses Java-based logging libraries with the Log4Shell vulnerability (CVE-2021-44228). If it's not patched, this vulnerability can allow remote code execution that an attacker can use to gain full access to your system from anywhere in the world. If exploited, a critical vulnerability like this can impact millions of devices.

Defending the application layer

Defending the application layer requires proper testing to uncover weaknesses that can lead to risk. Threat modeling is one of the primary ways to ensure that an application meets security requirements. A DevSecOps team, which stands for development, security, and operations, usually performs these analyses.

A typical threat modeling process is performed in a cycle:

  • Define the scope

  • Identify threats

  • Characterize the environment

  • Analyze threats

  • Mitigate risks

  • Evaluate findings

The six steps of the threat modeling process, shown as a cycle.

Ideally, threat modeling should be performed before, during, and after an application is developed. However, conducting a thorough software analysis takes time and resources. Everything from the application's architecture to its business purposes should be evaluated. As a result, a number of threat-modeling frameworks have been developed over the years to make the process smoother. 

Note: Threat modeling should be incorporated at every stage of the software development lifecycle, or SDLC.

Common frameworks

When performing threat modeling, there are multiple methods that can be used, such as:

  • STRIDE

  • PASTA

  • Trike

  • VAST

Organizations might use any one of these to gather intelligence and make decisions to improve their security posture. Ultimately, the “right” model depends on the situation and the types of risks an application might face.

STRIDE 

STRIDE is a threat-modeling framework developed by Microsoft. It’s commonly used to identify vulnerabilities in six specific attack vectors. The acronym represents each of these vectors: spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege.

PASTA

The Process of Attack Simulation and Threat Analysis (PASTA) is a risk-centric threat modeling process developed by two OWASP leaders and supported by a cybersecurity firm called VerSprite. Its main focus is to discover evidence of viable threats and represent this information as a model. PASTA's evidence-based design can be applied when threat modeling an application or the environment that supports that application. Its seven-stage process consists of various activities that incorporate relevant security artifacts of the environment, like vulnerability assessment reports.

Trike 

Trike is an open source methodology and tool that takes a security-centric approach to threat modeling. It's commonly used to focus on security permissions, application use cases, privilege models, and other elements that support a secure environment.

VAST

The Visual, Agile, and Simple Threat (VAST) Modeling framework is part of an automated threat-modeling platform called ThreatModeler®. Many security teams opt to use VAST as a way of automating and streamlining their threat modeling assessments.

Participating in threat modeling

Threat modeling is often performed by experienced security professionals, but it’s almost never done alone. This is especially true when it comes to securing applications. Programs are complex systems responsible for handling a lot of data and processing a variety of commands from users and other systems.

One of the keys to threat modeling is asking the right questions:

  • What are we working on?

  • What kinds of things can go wrong?

  • What are we doing about it?

  • Have we addressed everything?

  • Did we do a good job?

It takes time and practice to learn how to work with things like data flow diagrams and attack trees. However, anyone can learn to be an effective threat modeler. Regardless of your level of experience, participating in one of these exercises always starts with simply asking the right questions.

Key takeaways

Many people rely on software applications in their day to day lives. Securing the applications that people use has never been more important. Threat modeling is one of the main ways to determine whether security controls are in place to protect data privacy. Building the skills required to lead a threat modeling activity is a matter of practice. However, even a security analyst with little experience can be a valuable contributor to the process. It all starts with applying an attacker mindset and thinking critically about how data is handled.

Wrap-up; terms and definitions from course 5, week 4

Managing threats is a major part of what security professionals do.
In this part of the course, we've explored some common types of cyber threats that you'll likely encounter in the field. Let's review. 
We started off discussing social engineering.
You learned that attackers have a variety of ways to trick their targets into sharing private information.
Social engineering techniques rely on exploiting people's trust and willingness to help.
Phishing attacks are one of the most common ways that attackers go about manipulating their targets. 
Next, we explored malware.
Here, we discussed the major classes of malware, like viruses, trojans, and worms.
You learned how to spot signs of infection.
You also learned how malware has evolved and become more sophisticated over the years. 
After that, we turned our attention to web-based exploits, specifically injection attacks.
You learned about cross-site scripting and SQL injection, two of the most common types of attacks facing organizations online.
We discussed how each of these attacks is carried out.
You also learned about how web applications can be protected from malicious code.
Finally, we explored the threat modeling process.
You learned the process that security teams use to perform these exercises. 
Unfortunately, cyberattacks and security breaches are a reality that we're challenged with on a regular basis.
However, being aware of the types of threats that exist and the threat modeling process provides an important foundation for your work as a security analyst.

Glossary terms from week 4

Angler phishing: A technique where attackers impersonate customer service representatives on social media 

Advanced persistent threat (APT): Instances when a threat actor maintains unauthorized access to a system for an extended period of time 

Adware: A type of legitimate software that is sometimes used to display digital advertisements in applications

Attack tree: A diagram that maps threats to assets

Baiting: A social engineering tactic that tempts people into compromising their security

Botnet: A collection of computers infected by malware that are under the control of a single threat actor, known as the “bot-herder”

Cross-site scripting (XSS): An injection attack that inserts code into a vulnerable website or web application

Cryptojacking: A form of malware that installs software to illegally mine cryptocurrencies 

DOM-based XSS attack: An instance when malicious script exists in the webpage a browser loads

Dropper: A type of malware that comes packed with malicious code which is delivered and installed onto a target system

Fileless malware: Malware that does not need to be installed by the user because it uses legitimate programs that are already installed to infect a computer

Hacker: Any person or group who uses computers to gain unauthorized access to data

Identity and access management (IAM): A collection of processes and technologies that helps organizations manage digital identities in their environment 

Injection attack: Malicious code inserted into a vulnerable application

Input validation: Programming that validates inputs from users and other programs

Intrusion detection system (IDS): An application that monitors system activity and alerts on possible intrusions

Loader: A type of malware that downloads strains of malicious code from an external source and installs them onto a target system

Malware: Software designed to harm devices or networks

Process of Attack Simulation and Threat Analysis (PASTA): A popular threat modeling framework that’s used across many industries

Phishing: The use of digital communications to trick people into revealing sensitive data or deploying malicious software

Phishing kit: A collection of software tools needed to launch a phishing campaign

Prepared statement: A coding technique that executes SQL statements before passing them on to the database

Potentially unwanted application (PUA): A type of unwanted software that is bundled in with legitimate programs which might display ads, cause device slowdown, or install other software

Quid pro quo: A type of baiting used to trick someone into believing that they’ll be rewarded in return for sharing access, information, or money

Ransomware: Type of malicious attack where attackers encrypt an organization’s data and demand payment to restore access

Reflected XSS attack: An instance when malicious script is sent to a server and activated during the server’s response 

Rootkit: Malware that provides remote, administrative access to a computer

Scareware: Malware that employs tactics to frighten users into infecting their device

Smishing: The use of text messages to trick users to obtain sensitive information or to impersonate a known source

Social engineering: A manipulation technique that exploits human error to gain private information, access, or valuables

Spear phishing: A malicious email attack targeting a specific user or group of users, appearing to originate from a trusted source

Spyware: Malware that’s used to gather and sell information without consent

SQL (Structured Query Language): A programming language used to create, interact with, and request information from a database

SQL injection: An attack that executes unexpected queries on a database

Stored XSS attack: An instance when malicious script is injected directly on the server

Tailgating: A social engineering tactic in which unauthorized people follow an authorized person into a restricted area

Threat: Any circumstance or event that can negatively impact assets

Threat actor: Any person or group who presents a security risk

Threat modeling: The process of identifying assets, their vulnerabilities, and how each is exposed to threats

Trojan horse: Malware that looks like a legitimate file or program

Vishing: The exploitation of electronic voice communication to obtain sensitive information or to impersonate a known source

Watering hole attack: A type of attack when a threat actor compromises a website frequently visited by a specific group of users

Whaling: A category of spear phishing attempts that are aimed at high-ranking executives in an organization

Web-based exploits: Malicious code or behavior that’s used to take advantage of coding flaws in a web application

Terms and definitions from Course 5

Cybersecurity Glossary


A


Access controls: Security controls that manage access, authorization, and
accountability of information
Adware: A type of legitimate software that is sometimes used to display digital
advertisements in applications
Advanced persistent threat (APT): An instance when a threat actor maintains
unauthorized access to a system for an extended period of time
Algorithm: A set of rules used to solve a problem
Angler phishing: A technique where attackers impersonate customer service
representatives on social media
Application programming interface (API) token: A small block of encrypted code
that contains information about a user
Asset: An item perceived as having value to an organization
Asset classification: The practice of labeling assets based on sensitivity and
importance to an organization
Asset inventory: A catalog of assets that need to be protected
Asset management: The process of tracking assets and the risks that affect them
Asymmetric encryption: The use of a public and private key pair for encryption and
decryption of data
Attack surface: The characteristics and features of the areas where an attack can
come from

Attack tree: A diagram that maps threats to assets
Attack vector: The pathways attackers use to penetrate security defenses

B


Baiting: A social engineering tactic that tempts people into compromising their
security
Basic auth: The technology used to establish a user’s request to access a server
Bit: The smallest unit of data measurement on a computer
Botnet: A collection of computers infected by malware that are under the control of a
single threat actor, known as the “bot-herder”
Brute force attack: The trial and error process of discovering private information
Bug bounty: Programs that encourage freelance hackers to find and report
vulnerabilities


C


Cipher: An algorithm that encrypts information
Common Vulnerabilities and Exposures (CVE®) list: An openly accessible dictionary
of known vulnerabilities and exposures
Common Vulnerability Scoring System (CVSS): A measurement system that scores
the severity of a vulnerability
Compliance: The process of adhering to internal standards and external regulations
Computer virus: see “virus”
Cross-site scripting (XSS): An injection attack that inserts code into a vulnerable
website or web application
Cryptojacking: A form of malware that installs software to illegally mine
cryptocurrencies

Cryptographic key: A mechanism that decrypts ciphertext
Cryptography: The process of transforming information into a form that unintended
readers can’t understand
CVE Numbering Authority (CNA): An organization that volunteers to analyze and
distribute information on eligible CVEs

D


Data: Information that is translated, processed, or stored by a computer
Data at rest: Data not currently being accessed
Data in transit: Data traveling from one point to another
Data in use: Data being accessed by one or more users
Data custodian: Anyone or anything that’s responsible for the safe handling,
transport, and storage of information
Data owner: The person that decides who can access, edit, use, or destroy their
information
Defense in depth: A layered approach to vulnerability management that reduces risk
Digital certificate: A file that verifies the identity of a public key holder
DOM-based XSS attack: An instance when malicious script exists in the webpage a
browser loads
Dropper: A type of malware that comes packed with malicious code which is delivered
and installed onto a target system


E


Encryption: The process of converting data from a readable format to an encoded
format
Exploit: A way of taking advantage of a vulnerability

Exposure: A mistake that can be exploited by a threat


F


Fileless malware: Malware that does not need to be installed by the user because it
uses legitimate programs that are already installed to infect a computer


H


Hacker: Any person who uses computers to gain access to computer systems,
networks, or data
Hash collision: An instance when different inputs produce the same hash value
Hash function: An algorithm that produces a code that can’t be decrypted
Hash table: A data structure that's used to store and reference hash values


I


Identity and access management (IAM): A collection of processes and technologies
that helps organizations manage digital identities in their environment
Information privacy: The protection of unauthorized access and distribution of data
Information security (InfoSec): The practice of keeping data in all states away from
unauthorized users
Injection attack: Malicious code inserted into a vulnerable application
Input validation: Programming that validates inputs from users and other programs
Intrusion detection system (IDS): An application that monitors system activity and
alerts on possible intrusions


L


Loader: A type of malware that downloads strains of malicious code from an external
source and installs them onto a target system

M


Malware: Software designed to harm devices or networks
MITRE: A collection of non-profit research and development centers
Multi-factor authentication (MFA): A technology that requires at least two distinct
forms of identification


N


National Institute of Standards and Technology (NIST) Cybersecurity Framework
(CSF): A voluntary framework that consists of standards, guidelines, and best
practices to manage cybersecurity risk
Non-repudiation: The concept that the authenticity of information can’t be denied


O


OAuth: An open-standard authorization protocol that shares designated access
between applications


P


Payment Card Industry Data Security Standards (PCI DSS): A set of security
standards formed by major organizations in the financial industry
Personally identifiable information (PII): Any information used to infer an individual's
identity
Phishing: The use of digital communications to trick people into revealing sensitive
data or deploying malicious software
Phishing kit: A collection of software tools needed to launch a phishing campaign
Policy: A set of rules that reduce risk and protect information
Potentially unwanted application (PUA): A type of unwanted software that is
bundled in with legitimate programs which might display ads, cause device slowdown,
or install other software
Prepared statement: A coding technique that executes SQL statements before
passing them onto the database
Principle of least privilege: The concept of granting only the minimal access and
authorization required to complete a task or function
Procedures: Step-by-step instructions to perform a specific security task
Process of Attack Simulation and Threat Analysis (PASTA): A popular threat
modeling framework that's used across many industries
Protected health information (PHI): Information that relates to the past, present, or
future physical or mental health or condition of an individual
Public key infrastructure (PKI): An encryption framework that secures the
exchange of online information
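The prepared statement entry above can be illustrated with Python's built-in sqlite3 module, which supports parameterized queries via ? placeholders. The table and payload are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic SQL injection payload

# Prepared statement: the ? placeholder binds the input as data,
# so the payload is never parsed as SQL syntax.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # -> [] (no user is literally named "alice' OR '1'='1")

rows_ok = conn.execute(
    "SELECT role FROM users WHERE name = ?", ("alice",)
).fetchall()
print(rows_ok)  # -> [('admin',)]
```

Had the input been concatenated directly into the query string, the `OR '1'='1'` clause would have matched every row — the unexpected-query behavior the SQL injection entry describes.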


Q


Quid pro quo: A type of baiting used to trick someone into believing that they’ll be
rewarded in return for sharing access, information, or money


R


Rainbow table: A file of pre-generated hash values and their associated plaintext
Ransomware: A type of malicious attack where attackers encrypt an organization's data
and demand payment to restore access
Reflected XSS attack: An instance when malicious script is sent to a server and
activated during the server's response
Regulations: Rules set by a government or other authority to control the way
something is done
Risk: Anything that can impact confidentiality, integrity, or availability of an asset
Rootkit: Malware that provides remote, administrative access to a computer
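The core idea behind a rainbow table — pre-generated hash values mapped back to their plaintext — can be sketched in a few lines. Real rainbow tables use chained reduction functions to cover far more passwords per byte of storage; this toy dictionary just shows why unsalted hashes are recoverable:

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# A tiny rainbow-table-style lookup: pre-generated hash -> plaintext.
rainbow = {md5_hex(p): p for p in ["123456", "password", "qwerty"]}

stolen_hash = md5_hex("password")  # an unsalted hash from a breach
print(rainbow.get(stolen_hash))    # -> password (recovered instantly)
```

Salting (defined in the next section) defeats this approach, because the same password hashed with different salts no longer matches any pre-computed value.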


S

 

Salting: An additional safeguard that’s used to strengthen hash functions
Scareware: Malware that employs tactics to frighten users into infecting their device
Security assessment: A check to determine how resilient current security
implementations are against threats
Security audit: A review of an organization's security controls, policies, and
procedures against a set of expectations
Security controls: Safeguards designed to reduce specific security risks
Security hardening: The process of strengthening a system to reduce its vulnerability
and attack surface
Separation of duties: The principle that users should not be given levels of
authorization that would allow them to misuse a system
Session: A sequence of network HTTP requests and responses associated with the
same user
Session cookie: A token that websites use to validate a session and determine how
long that session should last
Session hijacking: An event when attackers obtain a legitimate user’s session ID
Session ID: A unique token that identifies a user and their device while accessing a
system
Single sign-on (SSO): A technology that combines several different logins into one
Smishing: The use of text messages to trick users in order to obtain sensitive
information or to impersonate a known source
Social engineering: A manipulation technique that exploits human error to gain
private information, access, or valuables
Spear phishing: A malicious email attack targeting a specific user or group of users,
appearing to originate from a trusted source
Spyware: Malware that’s used to gather and sell information without consent
SQL (Structured Query Language): A programming language used to create, interact
with, and request information from a database

SQL injection: An attack that executes unexpected queries on a database
Standards: References that inform how to set policies
Stored XSS attack: An instance when a malicious script is injected directly on the
server
Symmetric encryption: The use of a single secret key to exchange information
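Salting, as defined above, can be sketched with the standard library's PBKDF2 helper. This is a minimal illustration of the idea, not a complete credential store:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Salted, slow hash via PBKDF2; each user gets a unique random salt."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(hash_password(password, salt)[1], digest)

salt, digest = hash_password("hunter2")
assert verify("hunter2", salt, digest)

# The same password with a different salt yields a different hash,
# which is what defeats pre-computed rainbow tables.
assert hash_password("hunter2")[1] != digest
```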


T


Tailgating: A social engineering tactic in which unauthorized people follow an
authorized person into a restricted area
Threat: Any circumstance or event that can negatively impact assets
Threat actor: Any person or group who presents a security risk
Threat modeling: The process of identifying assets, their vulnerabilities, and how each
is exposed to threats
Trojan horse: Malware that looks like a legitimate file or program


U


User provisioning: The process of creating and maintaining a user's digital identity


V


Virus: Malicious code written to interfere with computer operations and cause damage
to data and software
Vishing: The exploitation of electronic voice communication to obtain sensitive
information or to impersonate a known source
Vulnerability: A weakness that can be exploited by a threat

Vulnerability assessment: The internal review process of a company's security
systems
Vulnerability management: The process of finding and patching vulnerabilities
Vulnerability scanner: Software that automatically compares existing common
vulnerabilities and exposures against the technologies on the network
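The vulnerability scanner entry describes a comparison of known vulnerabilities against technologies on the network. A toy version-matching check in that spirit (the installed inventory is made up for illustration; the two CVEs are real, well-known examples):

```python
# Known-vulnerable (package, version) pairs mapped to their CVE IDs.
KNOWN_VULNERABLE = {
    ("openssl", "1.0.1"): "CVE-2014-0160 (Heartbleed)",
    ("log4j", "2.14.1"): "CVE-2021-44228 (Log4Shell)",
}

# A hypothetical inventory of software found on the network.
installed = {"openssl": "1.0.1", "nginx": "1.25.3"}

# Flag every installed package whose version appears in the vulnerable list.
findings = [
    (pkg, ver, KNOWN_VULNERABLE[(pkg, ver)])
    for pkg, ver in installed.items()
    if (pkg, ver) in KNOWN_VULNERABLE
]
for pkg, ver, cve in findings:
    print(f"{pkg} {ver}: {cve}")  # -> openssl 1.0.1: CVE-2014-0160 (Heartbleed)
```

Production scanners match version ranges, probe services over the network, and pull feeds such as the NVD, but the core compare-inventory-against-known-CVEs loop is the same.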


W


Watering hole attack: A type of attack when a threat actor compromises a website
frequently visited by a specific group of users
Web-based exploits: Malicious code or behavior that's used to take advantage of
coding flaws in a web application
Whaling: A category of spear phishing attempts that are aimed at high-ranking
executives in an organization
Worm: Malware that can duplicate and spread itself across systems on its own


Z


Zero-day: An exploit that was previously unknown

Course wrap-up

Congratulations on making it through the end of this course!
I can hardly believe our time together is over.
Before moving on in the certificate program, I'd like to reflect on all the amazing progress you've made. 
When we started, you were introduced to a wide range of assets organizations protect.
Our primary focus was information security, specifically digital information. 
Here, you learned how asset classification helps security teams focus their efforts and prioritize resources.
We explored digital assets in the three states of data.
We also learned how policies, standards, and procedures can mitigate organizational risk.
Our focus on the NIST cybersecurity framework introduced you to a commonly used framework for managing risk. 
Afterwards, you learned about fundamental security systems and controls. 
You got to explore technology, like encryption, that's used to protect data in all its states.
You also learned how technologies like public key infrastructure and digital certificates are used to maintain the confidentiality, integrity, and availability of information online.
And you also explored access controls that make up the authentication, authorization, and accounting framework. 
Next, we explored common vulnerabilities in systems.
During this part of the course, you got an inside look into how security teams position themselves ahead of attacks. 
We explored the defense in depth strategy that's applied to protect information as it's exchanged between parties online.
You also learned about the Common Vulnerabilities and Exposures (CVE) list, the vulnerability assessment process, and attack surfaces and attack vectors. 
We then explored the major threats to asset security, like social engineering, malware, and web-based exploits. 
Together, we discussed how these attacks are carried out and the way security teams prevent them from doing damage.
We then finished up by exploring the process of threat modeling. 
We covered so much! 
I really appreciate your effort throughout it all. 
When I first started my career in security, my goal was to learn, network, and embrace any opportunity.
I attended security conferences, received job tips, earned references, and volunteered to gain experience.
At that time, I would have never imagined that I'd be here teaching what I've learned to others. 
That just goes to show you, your security journey is only just beginning.
While our time together is over, we covered a lot of complex topics, many of which are areas of specialization in security.
With the foundation you've built here, you have a wide range of possibilities to continue growing in the field. 
I'm so glad to have played a part in this step along your journey into the world of security.
And I wish you all the best as you continue forward along your path.