Hi from Tokyo, Japan, and welcome to Jeremy's IT
Lab. This video is about the Internet Protocol Suite, often called TCP/IP, the family of
protocols that makes communication over the Internet and other networks possible. You've
probably heard of some of these protocols: IP, TCP, UDP, HTTP and so on. If you
haven't heard of them, no problem: we'll cover many of these
core protocols in this course. From this suite of protocols, we can build
what people usually call the TCP/IP model: a way of grouping protocols into layers based
on the job they do. In this video we’re going to build that model step by step and use it as
a simple map of “who does what” in a network. This is going to be one of the longest videos
in the course, but there actually isn’t a lot you have to memorize: we’re covering a big-picture
framework about how networks work. Later, when you learn about things like IP addressing, routing and
switching, TCP and UDP, and so on, you’ll be able to drop each new topic into this framework and see
where it fits and how it works with the others. There’s also a lot of history - and a bit
of argument - around different models, like the OSI model and various versions of the
TCP/IP model. In this lesson I’ll give you a practical way to think about layers that will
help you understand how networks actually operate. As you continue studying, you can add nuance
and explore the debates if you’re interested, but you don’t need to solve those issues today. So, relax, don’t worry about memorizing
every concept, and just focus on the big ideas and relationships between the
layers. We’ll revisit these concepts many times in later videos. Alright,
enough talking, let’s get started. Let’s begin with the importance
of protocols and standards. A protocol is a set of rules defining how data
should be communicated between devices over a network. Protocols are the languages that
computers use to communicate. Just as someone who only speaks Chinese can’t talk with someone
who only speaks English, computers that “speak” different protocols can’t exchange data. Since the
early days of computer networking, there have been several attempts to define the functions needed
for computers to communicate with each other. These protocols were often developed by a specific
vendor, such as IBM, to be used with their own products. With this proprietary approach, enabling
communications between different vendors’ products was difficult, if not impossible. For example,
if we have an Apple iMac and an IBM server, each using its own proprietary protocols,
the IBM server wouldn’t be able to understand a request from the iMac. I’m just using
modern devices here as a visual example, but this kind of problem was common in the
early days of networking. To solve this issue, today’s networks and the devices connected to
them use standard, vendor-neutral protocols and technologies. A standard is an agreed-upon
specification that describes how a protocol or technology should work. When devices follow
vendor-neutral standards, all kinds of devices can communicate with each other. For example, an Apple
MacBook can access a website hosted on a Linux web server. And a PC running Windows can send an
email that can be read on a smartphone running Android. As long as the devices follow the same
standards, they can work together on the network. How did TCP/IP come about? Let’s cover a bit of
history. Early work on the computer networks that would evolve into today’s Internet began in the
1960s, mainly in the US. The US Department of Defense’s ARPA, Advanced Research Projects Agency,
funded ARPANET. It came online in 1969 to connect mainframes at universities and labs around the
country. Here’s a map showing ARPANET in the 1970s, connecting locations mostly around the east
and west coasts of the country. ARPANET originally used a protocol called NCP, the Network Control
Program. There was no TCP/IP in the early days. In 1974, Vint Cerf and Bob Kahn, two of
the fathers of the modern Internet, began developing TCP, Transmission Control Program,
as an internetworking protocol. TCP went through a few revisions, and was later divided into two
protocols that are still used today: Transmission Control Protocol, TCP. Notice the name change from
“Program” to “Protocol”. And Internet Protocol, IP. Together, these two protocols form the
foundation of the protocol suite widely known as TCP/IP today. And the ARPANET fully switched to
use TCP/IP on January 1st, 1983. Over time, TCP/IP became dominant over other vendor-proprietary
solutions at the time because it was published as a set of open standards that any vendor could
implement, and it could run over many different types of networks. And here we are several
decades later, still learning about TCP/IP. That’s enough history. Now who actually defines
these standards? Most networking standards are created by independent standards organizations,
not by a single vendor. Engineers from many different companies participate in the process.
When it comes to networking, there are two organizations you should be aware of. The first
one is the IEEE, the Institute of Electrical and Electronics Engineers. I mentioned the IEEE in
the previous video when talking about Ethernet cables. The IEEE develops many of the technologies
we use on local area networks, such as Ethernet, which is defined in the 802.3 set of standards.
The IEEE also develops Wi-Fi, which is defined in the 802.11 standards. These standards include
physical specifications like Ethernet cable types, Wi-Fi radio frequencies, and how to transmit signals
over the physical medium, whether a cable or a radio wave. But they also specify how to format
messages to send them to another device over the medium. Both are important topics that we’ll cover
in this course. The second standards group is the IETF, the Internet Engineering Task Force. The
IETF is an open community that defines many of the protocols used on the Internet. A few examples
are TCP, IP, UDP, HTTP, and DNS. If you don’t know these protocols yet, don’t worry: they’re all in
this course. The IETF publishes its standards in documents called RFCs, Requests for Comments, all
of which are freely available on the Internet. The IEEE and IETF create the standards, and then
vendors like Cisco implement them so that devices from different companies can work together. But
there are many different standards and protocols, each solving a different part of the overall
communication problem. To make sense of them, it helps to group their jobs into
layers, making a layered model. So let’s look at that layered model. Networks
do a lot of different jobs when they move data from one computer to another. Things like the
physical transmission of signals, local delivery of messages on a LAN, routing traffic between
networks, maintaining end-to-end conversations, and the applications themselves. A model lets us
group related jobs into layers. Each layer has a specific role, and each layer uses the services
of the layer below and provides services to the layer above. We’ll come back to this idea
later. Protocols live mostly at one layer. I say “mostly” because sometimes the lines are
blurred and it’s hard to categorize a protocol. Some examples of protocols that we’ll look at
later are IP, TCP, and HTTP. Together they form a “stack” of protocols that work as a team, often
called the network stack. Here’s an example of a protocol stack from RFC 791, which defines IP, the
Internet Protocol. It was published in 1981, but this is still the current standard we use today.
If you’re curious, do a Google search for RFC 791: as I said in the previous slide, RFCs are freely
available online, although they can be quite dense reading. You don’t need to read them for the
CCNA. We can divide this stack into a few layers like this. At the top, we have the application
layer, with protocols like Telnet, FTP, and TFTP. I won’t define these protocols now, but they’re
all covered in this course. Then below it, we have the Transport layer, with protocols like TCP
and UDP. Below that is the Internet layer. Here, “Internet” doesn’t just mean THE Internet, the
public Internet you’re probably using to watch this video. It refers to an internetwork,
multiple networks connected together. That’s the real job of this layer. This is where we
find IP, which has two versions in use today: IP version 4 and IP version 6, both important
CCNA topics. Finally, we have the Link layer, including local network protocols like
Ethernet and Wi-Fi. Each of these layers, and the protocols in them, plays a different
role in the network. This is the TCP/IP model, at least one version of it. The model is a
description, not a law. For example, different textbooks and courses use slightly different
models, some with 4 layers, some with 5 layers, and some with more. In this course I’ll use
a five-layer model that builds on this one. Before talking about networks specifically,
let’s use an analogy for this layered approach: sending a letter to a friend via the post. I
don’t use analogies like this often because they can cause misunderstandings if you take them
too seriously, but I think this one is helpful. On the left, you’re in your house. And on the
right, Bob and his wife are in their house, with a couple of post offices between, connected
by roads. You write a letter, which is addressed to Bob. You then put the letter in an envelope
addressed to Bob’s house. Bob’s house is a bit far from where you live, so instead of driving
directly to his house, you deliver the envelope in your car to post office A, like this. Notice
the three different destinations. Your car is going to post office A. The envelope inside your
car is going to Bob’s house. And the letter inside the envelope is going to Bob himself. Post office
A then moves the envelope to a truck and delivers it to post office B, like this. The truck’s
destination is post office B, but the envelope and letter inside keep their original destinations.
Then, post office B moves the envelope to a new truck and delivers it to Bob’s house, like this.
Once again, this truck has a different destination from the previous truck: Bob’s house. The envelope
and letter inside still have their original destinations. The envelope is now delivered to
Bob’s house, opened, and the letter addressed to Bob is read by him. So in this simple mail system,
different parts care about different things: the message itself, who should read it, which house
it goes to, and how it moves along the route. We can turn those roles into a layered model,
and then compare that model to how TCP/IP works. So, let’s build that model. Don’t worry about
remembering this model or the names of the layers: it’s just an analogy for now. At the top we
have a content layer, the text of the letter: what you actually want to say. I’ll call the next
layer the recipient layer: the “To: Bob” part, indicating who inside the house should read the
letter. Below that is the address layer, which is the destination address of the house where
that person lives. Next is the local delivery layer, which handles moving the envelope to the
next stop on the path using cars and trucks: from your house to post office A, then post
office B, then Bob’s house. With these four roles we already have a simple 4-layer model,
similar to the TCP/IP stack I showed before. But we can add one more layer at the
bottom. I’ll call it the infrastructure layer: the roads and other paths that
the vehicles travel on. But really, these bottom two layers are related and always
work together. If traveling over ground, local delivery uses a car or truck. If
traveling in the air, local delivery uses an airplane. If traveling by water, local
delivery uses a ship. And often the letter will travel over multiple kinds of paths in its
journey: ground, then air, then ground again, for example. You can either think of these
bottom two layers as one combined delivery layer, or split them into two layers. And that’s why some
networking models use 4 layers and some use 5. One of the benefits of a layered system like
this is how the layers work together, but remain separate. Each layer has its own job. They
work together to deliver the message, but each one focuses on its own task. The Content layer
focuses on the actual contents of the letter, and that doesn’t change throughout the journey.
The Recipient layer focuses on the individual person who should receive the letter, and also
remains the same from start to finish. Likewise, the Address layer focuses on the address of the
house or building to which the letter should be delivered, and that stays the same as well. The
Local Delivery layer focuses on getting the letter to the next stop in the route: post office A,
then post office B, and finally Bob’s house. The Infrastructure layer is the system of roads and
mail trucks that the delivery process relies on. What happens inside one layer doesn’t change the
job of the other layers. For example, changing the content of the letter doesn’t change the delivery
steps. And changing the delivery path doesn’t affect the letter itself. When you’re writing the
letter to Bob, you don’t have to think about how the postal service will go about delivering it. Of
course, that doesn’t mean the layers are entirely independent. For example, if you address the
letter to somewhere overseas, that will influence how it’s delivered. But that’s not your concern
as you write the letter. This separation of layers is true in our mail system model example, and
it’s true of networks as well. Now let’s replace letters with data and roads with networks, and
build the 5-layer TCP/IP model we’ll actually use. Here are the five layers of our mail system.
Let’s map them to the TCP/IP model. The Content layer is equivalent to the TCP/IP Application
layer. The Recipient layer maps to the Transport layer. The Address layer matches the Internet
layer. These are the same names I showed before for the TCP/IP model. But I’m going to use
a different name next. The Local Delivery layer matches what I’ll call the Local Network
layer. This is the layer used for delivering messages within a LAN, a local area network.
There are a few different names for this layer, but I’ll go with “Local Network”. And finally
the Infrastructure layer is equivalent to the Physical layer. At this point, these
names probably don’t mean much to you, so next we’ll see how they actually work in
a real network between a client and a server. Here is a simple network I’ll use to demonstrate.
You’ve seen these icons before. On the left, we have PC1, a client that will send a request
to SRV1, the server on the right. Between them are two routers, R1 and R2, and two switches: SW1
between PC1 and R1, and SW2 between R2 and SRV1. PC1 is running a web client application; in
other words, a web browser such as Chrome. SRV1 is running a couple of processes: a
web server and a file server. PC1’s user wants to access a web page hosted on SRV1, so
the web client on PC1, let’s say it’s Chrome, needs to send a request to the web server
on SRV1. That’s the role of the application layer: it includes protocols for communication
between application processes and is responsible for creating and interpreting the data. But
there’s a problem: there are multiple processes running on SRV1: a web server process and a
file server process, and possibly more. So, how can PC1 ensure that its request reaches
the correct process on SRV1? That’s the role of the Transport layer. Each process on SRV1
has an associated transport “port number”: 80 for the web server and 21 for the file server.
Don’t worry about the exact numbers for now; that’s for later in the course. The important
point is that, to send the message to the web server, PC1 addresses it to port 80. Don’t
be confused by the name “port”; these aren’t physical ports like we looked at in the previous
lecture. Here, a port is just a number used to identify a process running on a host. The role
of the Transport layer is to provide end-to-end communication between application processes, such
as the web browser on PC1 and the web server on SRV1, using port numbers. But we still have a
problem: even if the message is addressed to the correct port on SRV1, we still have to make
sure the message reaches SRV1 in the first place. SRV1 has an IP address for that purpose. For
example, let’s say its IP address is 10.1.1.1. Again, don’t worry about the format of the address
itself yet; that’s a topic for another video. By addressing its message to SRV1’s IP address,
PC1 tells the routers in the path which host the message should be delivered to. That’s the
role of the Internet layer: it provides end-to-end communication between hosts, from the source host
to the destination host, using IP addresses and routers. When you think of the Internet layer,
think “IP addresses and routers.” And “host”, by the way, just means a device connected to
the network that can send and receive data, such as PC1 and SRV1. So, we’re getting close
to achieving our goal of getting PC1’s request to SRV1, but we’re not quite there yet. There are
several devices between PC1 and SRV1, and we need to make sure the message is properly passed along
between them. That’s the role of the Local Network layer. Using protocols at this layer, such as
Ethernet, each device sends the message to the next device on the local network: PC1 to R1 via
SW1, then R1 forwards the message to R2. And finally, R2 forwards the message to SRV1 via SW2. The Local
Network layer provides hop-to-hop delivery within a local network using MAC addresses and switches.
I’ll explain the term “hop-to-hop” more in a bit, and MAC addresses are a topic for another video.
Looking at this diagram, you might wonder about the role of the switches; they connect devices in
a LAN and pass messages between them. We’ll look at how that works in detail in future
videos. Okay, and last but not least, we have the Physical layer: all of these cables
connecting the devices, and the transceivers on the devices that transmit and receive signals. The
Physical layer sends bits as electrical, optical, or radio signals over the physical medium.
Electrical signals over copper UTP cables, optical signals over fiber-optic cables, and
radio signals over wireless Wi-Fi connections. We’ve covered the role of each layer.
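To recap the walkthrough, a single request carries several destinations at once, just like the letter, the envelope, and the vehicle in the mail analogy. Here’s a toy sketch in Python; the field names and the MAC string are invented for illustration, not real protocol formats.

```python
# PC1's request to SRV1's web server, with one destination per layer.
message = {
    "dst_mac": "R1-G1-MAC",    # Local Network layer: next hop on the LAN (placeholder name)
    "dst_ip": "10.1.1.1",      # Internet layer: the destination host, SRV1
    "dst_port": 80,            # Transport layer: the web server process
    "data": "GET / HTTP/1.1",  # Application layer: the request itself
}

# On SRV1, the Transport layer uses the port number to pick the right process.
processes = {80: "web server", 21: "file server"}
destination_process = processes[message["dst_port"]]
```

Addressing the same message to port 21 instead would deliver it to the file server, with every other layer’s job unchanged.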
But now let’s go from the bottom up, from the Physical layer to the Application layer,
and look at a few more details. The Physical layer, also called “Layer 1”, is responsible
for sending and receiving bits as electrical, optical, or radio signals over the medium.
It defines things like cables, connectors, signal levels, and link speeds: all of the
physical aspects of communication. Examples include copper UTP and fiber-optic cables, Wi-Fi
radios and antennas, and network interface cards, also called NICs. Here are a few examples: UTP
ports and a cable on the left, fiber-optic in the middle, and an image of a NIC on the right. Each
network interface has a NIC like this inside of the device. The physical aspects of transmitting
data are very complex, involving a lot of electrical or optical engineering. Fortunately,
network engineers typically don’t have to know the low-level details of how it all works. So,
that’s the Physical layer: Layer 1 of our TCP/IP model. In some models this layer is combined
with the next layer, but I think it’s useful to differentiate between the physical concerns
like these and the logical aspects of Layer 2. Next, we have the Local Network layer, Layer
2. This layer provides hop-to-hop delivery of messages on a local network. So, what exactly
is a “hop”? A hop is one step along the path between two devices: from one router or host,
to the next router or host in the path. When PC1 sends a message to SRV1, how many hops are there?
The first hop is from PC1 to R1. The second hop is from R1 to R2. And the third hop is from R2 to
SRV1, so three hops in total. Switches don’t count as hops: a switch just extends the local network,
allowing multiple devices to connect. To keep the diagram simple I only show one host connected to
each of the switches, but there could be many more connected, like this. The switch allows them all
to connect to the same local area network. We’ll look at switches and LANs in detail in later
videos, so don’t worry about them for now. Layer 2 uses MAC, Media Access Control, addresses
to identify interfaces. Each device connected to a LAN has a unique MAC address for that
specific interface. Since R1 and R2 have multiple interfaces connected to the network,
let’s add some simple interface labels. G1, for GigabitEthernet1, and G2, for GigabitEthernet2,
meaning these interfaces operate at a speed of 1 gigabit per second. PC1 sends the message
to the MAC address of R1’s G1 interface, its NIC. That’s the interface that will receive
PC1’s message. R1 sends the message to the MAC address of R2’s G1 interface. And R2 sends
the message to the MAC address of SRV1’s interface. Once again, we’ll cover MAC addresses
in another video when we learn about switches. The key protocols at this layer that you should
know for the CCNA are Ethernet and Wi-Fi. Others exist, of course, but these are by far the
most commonly used Layer-2 protocols today. Next up we have the Internet layer, which
is Layer 3 of our model. This layer provides end-to-end delivery between hosts across multiple
networks. We call it “end-to-end” because it focuses on getting the message from the source
host all the way to the final destination host, instead of worrying about each individual
hop in the middle. Remember, “Internet” means “internetwork”, between networks. It uses
IP addresses to identify hosts in the network, kind of like a home address. In our example, SRV1
has the IP address 10.1.1.1. So when PC1 sends a message to SRV1, it addresses the message to
SRV1’s IP address. Routers operate mainly at this layer, using the message’s destination address to
forward the message toward its final destination host. There are a few protocols at this layer
that you’ll learn about for the CCNA: IP, both version 4 and version 6, and ICMP,
the Internet Control Message Protocol. Next is Layer 4, the Transport layer. This
layer provides end-to-end communication between application processes. And why is that needed?
Well, in our example, our server, SRV1, provides multiple services. It’s running a web server and
a file server, and perhaps other services. If SRV1 receives a message, it needs a way to know which
of these services should receive the message. This can also be called “process-to-process”
or “service-to-service” communication. This layer uses port numbers to identify the processes
on each host, such as port 80 for the web server and port 21 for the file server. So, when the web
client on PC1 wants to send a request to the web server running on SRV1, it addresses the message
to port 80. And if PC1 wanted to access the file server instead, it would address its messages
to port 21. The Transport layer allows hosts to differentiate between these different streams
of data. Layer 4 runs mainly on the communicating hosts, PC1 and SRV1. Routers normally operate
based on IP, Layer 3, not on Transport-layer information. There are exceptions of course,
but that’s a topic for later in the course. Layer 4 is primarily a conversation between the
two communicating hosts. There are a few protocols used at this layer. The two most prominent are
UDP, User Datagram Protocol, and TCP, Transmission Control Protocol. They both offer some different
features, and we’ll cover them in this course. Finally, we have Layer 5: the Application
layer. This is where network communications meet applications. Small side note:
this layer is usually called “Layer 7”. I’ll explain why later when we talk about
another model, the OSI model. The Application layer defines how application processes
format, send, and interpret data. So, when Chrome on PC1 sends a request
to the web server process on SRV1, it uses an Application-layer protocol such
as HTTP to format and send the message, and that same protocol tells the web server how
to interpret the message it receives. Protocols at this layer define message formats and rules
for specific tasks. Here are a few examples. For browsing web pages, we use HTTP, or more often
its secure version, HTTPS. For file transfers, we can use FTP, TFTP, or another similar
protocol. And email uses its own protocols, too. Don’t worry about the specific protocols or their
names at this point; right now we’re just talking about the big concepts. Network infrastructure devices
like routers and switches typically don’t care about Application-layer details. This is too
high-level for them. They just move messages across the network. Only the communicating
hosts, PC1 and SRV1 in this case, actually look at and interpret the Application-layer
data. Now that we’ve seen what each layer does, how does a single message actually include all of
this information at once? Let’s see how the layers combine into a stack and how data moves through
them using encapsulation and decapsulation. To show how encapsulation and decapsulation
work, let’s simplify the network, making it a direct connection between PC1 and SRV1. First,
the Application layer prepares the data to be sent over the network, for example an HTTP request
that Chrome on PC1 sends to a web server running on SRV1. As the message moves down the stack, each
layer encapsulates the data with a header that contains the information needed by that layer.
For example, the source and destination addresses: Layer 4 port numbers, Layer 3 IP addresses, and
Layer 2 MAC addresses, among other information. So first, the Transport layer encapsulates
the data with its header, the Layer 4 header, with source and destination port numbers
and other information. Then the Internet layer adds its header with source and destination
IP addresses. Then, Layer 2 encapsulates the data with both a header and a trailer. The trailer
is used by the receiving device to check for transmission errors. We’ll look at this in
more detail in a later video about Ethernet. Finally, the Physical layer transmits the
bits as signals over the physical medium, which is an Ethernet cable in this case. I’m
showing the message in order from left to right here. So the Layer 2 header is transmitted
first, and the Layer 2 trailer is transmitted last. So that was the encapsulation process; now let’s
look at decapsulation. The receiving device receives the message as a stream of bits at Layer
1. Layer 1 simply passes those bits up to the next layer. Now it’s up to the other layers to
interpret those bits. The device examines the information in the Layer 2 header and trailer, and
then removes them. This is called “decapsulation”, or sometimes “de-encapsulation”. The decapsulation
process continues up the stack. So, Layer 3 examines and removes the Layer 3 header. Then
Layer 4 examines and removes the Layer 4 header, and the data is delivered to the Application
layer. Now the application processes the data and, if needed, generates a response that goes back
down the stack, and is then transmitted back to PC1. So, that’s the decapsulation process, which
is basically the encapsulation process in reverse. Each device has its own network stack.
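That encapsulation and decapsulation round trip can be sketched with nested data structures. This is a toy model, not real header formats: the field names, the source port 54321, and the source IP 10.0.0.1 are all invented for illustration.

```python
# Encapsulation on PC1: each layer wraps the PDU from the layer above.
data = "GET / HTTP/1.1"                                           # Application layer
l4 = {"src_port": 54321, "dst_port": 80, "payload": data}         # + Layer 4 header
l3 = {"src_ip": "10.0.0.1", "dst_ip": "10.1.1.1", "payload": l4}  # + Layer 3 header
l2 = {"dst_mac": "SRV1-MAC", "payload": l3, "trailer": "FCS"}     # + Layer 2 header and trailer

# Decapsulation on SRV1: examine and remove each header in reverse order.
received = l2["payload"]         # Layer 2 checks the trailer, strips its header and trailer
received = received["payload"]   # Layer 3 checks dst_ip, strips its header
received = received["payload"]   # Layer 4 delivers the data to the process on port 80

assert received == data          # the application data arrives unchanged
```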
A message from PC1 goes down its stack, crosses the wire, and then goes up SRV1’s
stack where the Application layer processes it. The response does the same thing in the
opposite direction. It goes down SRV1’s stack, is sent over the wire to PC1, and then goes
up PC1’s stack where the data is interpreted by the application. In a real network there are
switches, routers, and other devices between the hosts. That adds more steps to the delivery
process, just like the mail system example, where the letter is moved between cars and trucks.
But the overall flow is still the same: each host sends the message down its stack, across the
network, and up the other host’s stack. We’ll look at how those intermediate devices - the routers
and switches - handle messages in later videos. Now that we’ve seen how encapsulation and
decapsulation work, let’s look at what to call these messages. At each stage in the encapsulation
and decapsulation process, there is a name given to the message. The combination of data and a
Layer 4 header is called a segment when using TCP, or a datagram when using UDP. Remember that point:
TCP creates segments, and UDP creates datagrams. This difference has to do with how TCP and UDP
treat the data, but we don’t need those details yet. The combination of a segment or datagram and
a Layer 3 header is called a packet. “Packet” is the most common term we use when talking about
messages being sent over a network, but strictly speaking it refers to the message at this stage in
the process. Finally, the combination of a packet and a Layer 2 header and trailer is called
a frame. This is what is actually sent over the wire. You’ll never see a packet, segment,
or datagram travelling over the wire itself; they are always sent inside a frame. We can also
use alternative names to describe the message at each stage, using the term protocol data unit,
or PDU. A segment or datagram is a Layer 4 PDU, or L4PDU. A packet is a Layer 3 PDU, or L3PDU. And
a frame is a Layer 2 PDU, or L2PDU. Both names are common, so I recommend knowing both. And there’s
one more important term to learn. The contents of each PDU, that is everything encapsulated by
the layer’s header and trailer, are called the payload. So, at Layer 4 a segment or datagram’s
payload is the application data itself. At Layer 3, a packet’s payload is a segment or datagram,
including the Layer 4 header and the data. And at Layer 2, a frame’s payload is a packet, including
the Layer 3 and Layer 4 headers and the data. Just remember that the payload is what’s inside the
PDU, not including that layer’s header or trailer. Finally, let’s talk about how these layers
interact with each other. I’ve replaced the generic layer names with examples of common
protocols. Also note the distinction between Layer 2 Ethernet, which defines MAC addressing
and frames, and Ethernet at the physical layer, which defines the signaling and physical
media. Each layer in the stack provides a service to the layer above it, and is serviced by
the layer below it. This is called adjacent-layer interaction. For example, Layer 4 provides a
service to Layer 5 by delivering data to the correct application using port numbers. Layer
3 provides a service to Layer 4 by delivering segments and datagrams to the correct destination
host using IP addresses. Layer 2 provides a service to Layer 3 by delivering packets to
the next hop using MAC addresses. And Layer 1 provides a service to Layer 2 by sending and
receiving the frame’s bits as electrical, optical, or radio signals over the physical medium. Each
layer relies on the layer below it to do its job. There’s also a related concept. Each layer
communicates with the same layer on other devices, and this is called same-layer interaction.
For example, the Application layer on one host sends data to the Application layer on the other
host. A segment or datagram is addressed to the Layer 4 port number of the correct application
on the destination host. A packet is addressed to the Layer 3 IP address of the destination
host. A frame is addressed to the Layer 2 MAC address of the next hop. And signals sent
out of a physical port are received by a physical port on the connected device.
This layered cooperation - within each device and between devices - is what
makes network communication possible. Just like in our mail system example, the
separation of layers is key to how networks work. Each layer has its own job and provides a
specific service to the layers above. Because the layers are modular, we can swap protocols at one
layer without changing the others. For example, instead of a web page exchange using HTTP and
TCP, maybe it’s a file exchange using TFTP, Trivial File Transfer Protocol, over UDP.
The lower layers in the stack can use IP and Ethernet without worrying about
the details of what the upper layers are doing. And if instead of a wired Ethernet
connection it’s a wireless Wi-Fi connection, Layers 1 and 2 can change to Wi-Fi without
affecting the upper layers. As long as each layer keeps its “contract” with the layers above
and below, we can improve or replace protocols at different layers without redesigning everything.
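You can see this modularity directly in Python's socket API. In this minimal sketch over the loopback interface, the application simply picks a transport (TCP or UDP); everything below that choice, IP and the link layer, is handled by the operating system and doesn't change. (The helper function and the exchange itself are illustrative, not part of any standard.)

```python
import socket

def make_transport(kind: str) -> socket.socket:
    # The application chooses the Layer 4 protocol; the layers
    # below (IP, Ethernet/loopback) are untouched by this choice.
    if kind == "tcp":
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    return socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# A UDP exchange over loopback; the same IP layer underneath
# would carry TCP segments just as happily.
receiver = make_transport("udp")
receiver.bind(("127.0.0.1", 0))      # let the OS pick a free port
receiver.settimeout(2)
addr = receiver.getsockname()

sender = make_transport("udp")
sender.sendto(b"hello", addr)        # datagram addressed by IP + port
data, _ = receiver.recvfrom(1024)
print(data)                          # b'hello'

sender.close()
receiver.close()
```

Swapping `"udp"` for `"tcp"` changes only the transport-layer behavior; the addressing scheme below it stays the same, which is the "contract" between layers at work.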
That flexibility is one of the main benefits of a layered model. Now that we’ve built
this 5-layer TCP/IP model, let’s see how it compares to other models you’ll see in
other courses and books, like the OSI model. You might have heard of the OSI model
before. If you’ve studied IT at all, you almost certainly have. As I said at the
beginning of this video, TCP/IP development started in the 1970s, with work on ARPANET and
the early TCP/IP specifications. But in the late 1970s and 1980s, the International Organization
for Standardization, the ISO, designed a 7-layer model called OSI, Open Systems Interconnection,
and a matching protocol suite. The goal was to create international, vendor-neutral networking
standards that could unify existing proprietary stacks and potentially replace TCP/IP as the
vendor-neutral choice. Here is that model, with seven layers from top to bottom: Application,
Presentation, Session, Transport, Network, Data Link, and Physical. Governments, including the
US, promoted OSI as the preferred or recommended stack for new deployments. And for a while many
people thought that OSI was the future. But OSI protocols ended up being developed too late and
were too complex, and they never gained the same level of deployment as TCP/IP. I wasn’t around
at the time, but my understanding is that this was largely due to the bureaucratic nature of
the approach to developing OSI and its protocol stack. The process was very top-down: committees
designed the protocols in detail, and vendors were expected to implement them exactly as specified.
TCP/IP used a more bottom-up approach. In the end, TCP/IP won in the real world, although some
OSI technologies are still used. Today, almost all networks use the TCP/IP protocol stack,
but the 7-layer OSI model survives as a reference and teaching model, and as a common way to talk
about layers. Actually, most networking resources use a 5-layer model like the one I covered in this
video, but with names borrowed from the OSI model. Here is that 5-layer model. I foreshadowed this
earlier in the video, but this is why the TCP/IP Application layer is often called Layer 7: because
the Application layer is Layer 7 of the OSI model. I’ve also highlighted the two layer names that
are different from the model I introduced: Layer 3 is called the Network layer, and
Layer 2 is called the Data Link layer. And to make things even more confusing, here’s
a selection from Wikipedia of different models proposed by various authors. Here we have the
four-layer TCP/IP model I showed at the beginning of this video. Here is the seven-layer OSI model.
And the adapted five-layer model I just showed in the previous slide. When you see all of these
variations, you might wonder which model and which names you should actually learn. Of course,
I recommend the five-layer model I presented in this video, because I think its names best match
the job of each layer. But this five-layer model shown here is probably the most common in books
and other resources. The only differences from my model are that Layer 3 is called the Network
layer and Layer 2 is called the Data Link layer. In the end, it’s not that important:
in practice people usually just refer to “Layer 2” and “Layer 3”, and you won’t be
quizzed about layer names on the CCNA exam. Before wrapping up, let’s review the key
concepts from this video. I won’t explain everything again here; I’ll just point out what
you should focus on as you review. The first is, of course, the TCP/IP model itself. You should
know the general purpose of each of these layers. I also recommend knowing these more
commonly used names borrowed from the OSI model. It’s also important to understand the
encapsulation and decapsulation processes. In encapsulation, the sending
host adds headers, and a trailer, to the data to prepare it for transmission
over the physical medium. In decapsulation, the receiving host removes the headers and trailer
layer by layer until it gets to the data inside. You should also know the names of the different
PDUs, protocol data units. The Layer 4 PDU is a segment when using TCP and a datagram when using
UDP. The Layer 3 PDU is called a packet. And the Layer 2 PDU is called a frame; the frame is what
is actually transmitted over the physical medium. And finally, you should understand adjacent-layer
interaction and same-layer interaction. These concepts describe how the different layers
work together within each host and with their counterparts on other devices to achieve
communication between applications over a network. That’s all for this video on the TCP/IP model. By
now you should have a mental picture of the five layers, what each one is responsible for, and
how they work together using encapsulation and decapsulation. You don’t need to remember every
example or every historical detail; the important thing is that you know the layers, their general
roles, and the idea of PDUs and layer interaction. We’ll keep coming back to this model throughout
the course. You’ll hear the terms “Layer 1”, “Layer 2”, “Layer 3”, and so on many times.
As we dive into topics like Ethernet, IP addressing, TCP, and routing, you’ll be
able to place each new concept at the right layer and see how it fits into
the bigger picture. Just remember, this is a model, not a law. Real protocols
don’t always fit neatly into a single layer, and that’s okay. The model is just a tool to help
you think about what’s happening in the network. If you’d like a structured companion to
these videos, I’ve also written two books: Acing the CCNA Exam Volume 1 and Volume 2. They
go into more detail than I can fit in the videos, and include hundreds of diagrams and practice
questions. I honestly think they’re my best work when it comes to teaching the CCNA. If
you’re interested, you can find them on Amazon, or use the links in the description to get them
directly from the publisher. Thanks for watching. Before we wrap up, I want to say a quick thank
you to the channel members whose names you see on screen. Your monthly support helps me keep
making these videos and keep this content free for everyone on YouTube. I really appreciate
it: thank you for your support. If the video was helpful and you want to support me, click
the join button under the video. Thanks again.