Anki, My New Love

This post was also featured on

Until now, I was never one to use flashcards. I could not see their value, and I was too lazy to actually write things down on paper flashcards (and my handwriting is horrible).

I recently discovered a program called Anki. On the surface, it is just a flash card program, but underneath, it can be as simple or as complex as you desire. The first couple of days that I used Anki, I was still in this mindset that flashcards are not for me, and they hold no value with how I am used to learning.


What makes Anki so great (in addition to being free for every platform except Apple iOS) is the way it works. Active recall and spaced repetition are what make it such a powerful program. As mentioned in the link, active recall is the process of answering a question posed to you, as opposed to just passively studying (such as reading or watching training videos). Spaced repetition is the practice of spreading out reviews of the material in gradually longer increments, the idea being that you’ll remember things for a longer period of time by doing this.

Anki is based on another program called SuperMemo. I first heard of SuperMemo a few years ago after reading this blog post by Petr Lapukhov (one of many people I consider a rock star in the world of computer networking). A lot of research went into the development of SuperMemo (and consequently, Anki), and Anki attempts to solve some of the perceived shortcomings of SuperMemo.

After using it for nearly two weeks, I am already experiencing the benefit of learning using this method. I am retaining details from the flashcards I created that I know I would have forgotten (because I’ve learned and forgotten these things in the past, probably more than once!).

The flashcards are arranged into decks, and decks can contain other decks. The cards themselves can contain pretty much any content you can think of, including audio and video. Cards can also contain tags, which I’ve found to be extremely useful.

For example, even though I am studying the overall topic of “CCIE Routing & Switching”, I have multiple sub-decks, with each deck representing a source of information (such as a particular book, or a particular video series). Yet I can relate the different decks together with the use of tags. For example, I could study on the EIGRP tag across all the sub-decks.

One of the most useful things I have learned about creating flashcards is to not put too much on a single card. I found it better to break things up as much as possible. This helps with faster recall, and since you’re not actually using paper, it doesn’t matter how many cards you create.

For the first couple of days, I had a few cards that contained too much information, and I kept getting the answers wrong. After I broke each complicated card into multiple simpler cards, I was able to retain the information better with each successive pass.

What led me to create more complicated cards at first was knowing that, for example, the CCIE is an advanced test with expert-level questions. I thought I would be doing myself a disservice by making the flashcards too easy. Luckily, I quickly realized that this is the wrong approach. The reason for using the flashcards is to retain little pieces of information, whose aggregate can then be applied to something more complex.

When making easier cards, I try to contain only a single piece of information in the answer whenever possible. When it’s not possible, I try to formulate the question so that it indicates the number of components in the answer. I also modified the default flashcard format to display the associated tags I have given the flashcard, which can act as a hint if the question seems too ambiguous.

The style of flashcard will depend on what you’re trying to learn. For example, if you’re learning a foreign language, you may place the foreign word on the front, and the native word on the back (or vice versa). For me, I found taking simple facts and re-phrasing them as simple questions to be the most effective. I find the question “What IP protocol does EIGRP use?” more engaging than simply “EIGRP IP Protocol” or something similar. IP Protocol 88 is the answer, by the way.

At first, I was worried about the questions being too easy. This is a simple question, and duh, the answer is obvious! But, the answer is always obvious as you are writing the question. A few days or a week later, the answer may not be so obvious. This was what I discovered after using the program for about two weeks. I remember writing the question, and I remember the answer being something very easy…but I couldn’t remember what the answer was.

Enter spaced repetitions.

After you have created the flashcards, reviewing is just like a real flashcard; you look at the front, and recall what is on the back. What makes Anki work so well is that upon revealing the back of the card, you have to decide how difficult or easy recalling the answer was. This is where you need to be truly honest with yourself to get the most out of the software.

Depending on what you click (Again, Hard, Good, Easy), the card will be shown to you again at the appropriate time in the future. For example, if the answer came to you instantly, you would click Easy. If the answer comes to you instantly again when you see the card the next time, clicking Easy again will increase the time Anki waits before showing you the card again. The Again, Hard, Good, Easy values are not static, and depend on multiple factors that change with each repetition of the flashcard.
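Anki’s scheduler descends from SuperMemo’s SM-2 algorithm. As a rough illustration of the behavior described above, here is a simplified sketch in Python. To be clear, this is not Anki’s actual algorithm, and the constants are made up to be plausible; the point is the shape of the behavior, where each rating adjusts an “ease” factor and the interval grows multiplicatively for cards you keep answering well.

```python
# Simplified SM-2-style scheduler: a sketch of the idea, not Anki's real code.
# Each card carries an interval (days until next review) and an "ease" factor.

def next_review(interval_days, ease, rating):
    """Return (new_interval_days, new_ease) after one review.

    rating mirrors Anki's buttons: "again", "hard", "good", or "easy".
    """
    if rating == "again":
        # Failed recall: relearn tomorrow, and the card's ease drops.
        return 1, max(1.3, ease - 0.20)
    if rating == "hard":
        # Barely recalled: grow the interval only slightly, ease drops a bit.
        return max(1, round(interval_days * 1.2)), max(1.3, ease - 0.15)
    if rating == "good":
        # Normal recall: multiply the interval by the ease factor.
        return max(1, round(interval_days * ease)), ease
    if rating == "easy":
        # Instant recall: extra growth bonus, and ease increases.
        return max(1, round(interval_days * ease * 1.3)), ease + 0.15
    raise ValueError(rating)

# A card answered "good" every time spaces out quickly:
interval, ease = 1, 2.5
for _ in range(4):
    interval, ease = next_review(interval, ease, "good")
    print(interval)   # prints 2, then 5, then 12, then 30
```

This is why honest self-grading matters: clicking “Easy” on a card you actually struggled with pushes its next review far into the future, exactly when you can least afford it.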

Getting into the routine of reviewing the flashcards once every day is important to retaining the knowledge. By default, Anki will introduce 20 new flashcards to you every day per deck. This value (like just about everything else in Anki) can be adjusted. The cards can be sequential (default) or randomized (which is what I set it to). If you make your flashcards simple enough, 20 may be a very good value for you. If you have a deck of 200 cards, it will take 10 days for all of the cards to be revealed to you.

However, in addition to the 20 new cards, each day will contain previous cards depending on how you rated them. If you rated a card as “Hard” yesterday, you’ll probably see it repeated today. This is what I have found to be so useful over the past two weeks.

Marking a card “Again” causes it to be shown again during the same day’s study session. After a couple of days of marking a card “Again”, I might mark the same card “Hard”, and after a few more repetitions, the card becomes “Good”, and hopefully eventually “Easy”. I haven’t been using the program long enough yet for some of my cards to make that complete progression, but I can see it getting there, which is exciting! Yet another great thing about Anki is that it keeps statistics on your learning, and you can view your progress on nice pretty graphs.

Because cards that are marked “Easy” get displayed less, you waste less time studying those cards because you’ve retained that information, so you can study other cards that are more important. Before using Anki, that was a very bad habit I found myself falling into frequently: studying things I already knew, because it’s easier.

Anki supports sharing the flashcard decks you create. This may be useful if you want to import somebody else’s work, but personally, I found much more value in creating my own flashcards with my own questions and answers because it forces me to examine the individual piece of information and then figure out how to formulate it into an answerable question (which is not always as easy as you might think it is).

When you’re studying for something complicated, such as a certification, the material may contain many details that are important to know but difficult to retain, because you don’t frequently need that information. Going back to the EIGRP example, you need to know the default K-values for some Cisco certification exams, but in a production network it is rare to actually need that exact detail, and it is even rarer for those values to get changed. However, through the power of spaced repetition, it is a piece of information that you can hold on to.
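For reference, EIGRP’s default K-values are K1=1, K2=0, K3=1, K4=0, K5=0, meaning only bandwidth and delay factor into the metric by default. In IOS they map onto the metric weights command, where the first value is the ToS (always 0), followed by K1 through K5 (the AS number here is arbitrary):

```
router eigrp 100
 metric weights 0 1 0 1 0 0
```

That command, as shown, simply restates the defaults; it makes a nice flashcard answer precisely because you will almost never type it in production.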

And who knows? Outside of a certification exam, maybe one day you’ll run into a situation where that particular bit of information really is helpful, and that is when knowledge and experience will combine to give you the solution you need.

On a personal note, it may sound silly considering I am 36 years old as I write this, but during this past year I really feel like I am finally learning how to learn. I feel like I am discovering things that I should have been taught in high school or college. I would certainly have had an easier time with the more difficult subjects if I knew then what I know now.

Bringing an Old Mac Pro Back to Life with ESXi 6.0

It’s been quite a while since I’ve done a purely technical post.

The original Mac Pro is a 64-bit workstation-class computer that was designed with the unfortunate limitation of a 32-bit EFI. The two models this post discusses are the original 2006 Mac Pro 1,1 and the 2007 Mac Pro 2,1 revision. Both systems are architecturally similar, but the 2006 model features two dual-core CPUs, while the 2007 model has two quad-core CPUs, both based on the server versions of Intel Core 2 chips. I have the 2007 version, which has two Intel Xeon X5365 CPUs for a total of eight cores.

Apple stopped releasing OS X updates for this computer in 2011, with 10.7 Lion being the final supported version. There are workarounds to get newer versions of OS X to run, and a similar concept is used to make newer versions of ESXi run. On a side note, getting newer versions of OS X to run on these old Mac Pros works pretty well, as long as you have the necessary hardware upgrades, which include a newer video card and potentially newer wi-fi/bluetooth cards.

Like older versions of OS X, older versions of ESXi booted and installed without issue on the old Mac Pros. But at some point, ESXi stopped being supported on these Macs, due to newer systems using a 64-bit EFI while the older systems are stuck with a 32-bit EFI. However, even though it is nearly 10 years old, the 2007 Mac Pro still has eight Xeon CPU cores (the two quad-core CPUs combined have roughly the same computational power as a single Sandy Bridge-era Core i7), is capable of housing 32 GB of RAM plus four hard drives (six if you don’t care about the drives being seated properly), and has four full-length PCI-e slots and two built-in Gigabit Ethernet ports.

This computer is more than worthy for lab use, and could definitely serve other functions (such as a home media server or NAS). Additionally, when running ESXi, you do not need to have a video card installed, which frees up an extra PCI-e slot.

To get ESXi 6.0 (I used Update 2) to run on the old Mac Pro, you need the 32-bit booter files from an older version of ESXi. The process involves creating an installation of ESXi 6.0 and then replacing the files included in the link on the new installation.

To do this, I installed ESXi 6.0 Update 2 into a VM on a newer Mac running VMware Fusion, using the physical disk from the Mac Pro as the VM’s disk. The physical disk may be attached to the newer Mac using any attachment method (USB, etc.); I used a Thunderbolt SATA dock. VMware Fusion does not let you attach a physical disk to a VM from within the GUI, but it can be done.
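One way I’ve seen this done (a sketch only; the disk number, file names, and bus/slot below are examples, not what I necessarily used) is with the vmware-rawdiskCreator tool that ships inside the Fusion application bundle, followed by a hand edit of the VM’s .vmx file:

```
# 1) Wrap the physical disk in a raw-disk VMDK (run in Terminal; /dev/disk4
#    is an example -- check "diskutil list" for your actual device):
cd "/Applications/VMware Fusion.app/Contents/Library"
./vmware-rawdiskCreator create /dev/disk4 fullDevice ~/macpro-rawdisk ide

# 2) Attach the wrapper by appending lines like these to the VM's .vmx
#    (use the full path to the .vmdk):
#    ide1:0.present = "TRUE"
#    ide1:0.fileName = "/Users/you/macpro-rawdisk.vmdk"
```

The VM then reads and writes the physical disk directly, which is what allows the ESXi installer running in the VM to lay down a bootable installation for the Mac Pro.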

After creating the VM, attaching the physical disk, and booting from the ESXi ISO image, I installed ESXi, choosing to completely erase and use the entire physical disk. After installation, you may wish to do like I did and boot up the VM before you replace the EFI files. The reason is so that you can set up the management network. By setting this up in advance, you can run your Mac Pro headless, and just manage it from the network.

After you have installed ESXi in the VM onto the physical disk (and optionally set up the management network options), shut down the VM, but leave the physical disk attached. Go to the Terminal, type “diskutil list” without quotes, and look for the partition that says “EFI ESXi”. Make a note of the identifier (it was disk4s1 in my case). Enter “diskutil mount /dev/disk4s1” or whatever yours may be.

Use the files included in the ZIP to replace:


Then unmount the physical disk with “diskutil unmountdisk /dev/disk4” (changing 4 to your actual disk; don’t specify the individual partition). Then connect the disk to your Mac Pro, power it on, and have fun.

By having ESXi installed on a Mac Pro, you are able to install OS X virtual machines without requiring the VMware Unlocker workaround. Additionally, with four PCI-e slots, you could add things like Fibre Channel HBAs, multi-port NICs, USB 3.0 cards, etc.

The downside to using a Mac Pro 1,1 or 2,1 today, though, is its power usage and heat output. This is due to two primary factors: the CPUs and the RAM. Both are considered horribly inefficient and power hungry by today’s standards (but what do you expect with 10-year old technology?). The two CPUs each have a TDP of 150W. Nearly all of the Intel Xeon CPUs produced today (even the most expensive ones) run much cooler than this. The other culprit is the DDR2 FB-DIMM RAM.

To provide some perspective, I plugged in my handy Kill-a-Watt to see what kind of power was being used. I thought the bulky X1900 XT video card that came with the system would be a large part of the equation, but that turned out to not be true. With the video card, 32 GB of RAM (8x4GB), and a single SSD, the system consumes about 270W idle! Take out the video card, and it idles at 250W. Take out 24 GB of memory (leaving two 4GB sticks installed), and the power drops to 170W. So the FB-DIMMs alone consume about 100W altogether. I calculated that where I live, it would cost about $1 a day in electricity to keep it running 24/7.
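The arithmetic behind that estimate is straightforward. The electricity rate below is an assumption (the post doesn’t state my actual rate); around $0.15/kWh puts the 270W idle draw right at about a dollar a day:

```python
# Rough daily electricity cost from an idle wattage reading.
# The $0.15/kWh rate is an assumed figure for illustration.

def daily_cost(watts, rate_per_kwh=0.15):
    kwh_per_day = watts * 24 / 1000   # watts -> kWh consumed per day
    return kwh_per_day * rate_per_kwh

print(round(daily_cost(270), 2))   # full config idle: prints 0.97
print(round(daily_cost(170), 2))   # minus video card and 24 GB RAM: prints 0.61
```

At those numbers, just powering the machine off when not in use saves roughly $30 a month.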

For perspective, my main server, which houses two quad-core Nehalem Xeons (which are about 7 years old as I write this), 48 GB of RAM (6x8GB DDR3 DIMMs), and 12 hard drives, uses a total idle power of 250W. A typical modern desktop PC probably uses less than 100W.

Another potential disadvantage is that the Mac Pro 1,1 and 2,1 have PCI-e version 1.1 slots, which are limited to 2.5 GT/s per lane (roughly 250 MB/s of bandwidth). This may or may not be an issue, depending on the application, but don’t expect to be running any new 32Gb FC cards with it.

Possibly the most serious disadvantage, especially with regards to lab usage, is that the CPUs in these Macs, while they support Intel VT-x, do not support EPT, which was introduced in Intel’s next microarchitecture, Nehalem. EPT (Extended Page Tables), otherwise known as SLAT (Second Level Address Translation), is what allows for nested hypervisors. This means you can’t run Cisco VIRL on these Mac Pros.

So for me, reviving the old Mac Pro is good for lab purposes, and I turn it off when I’m not using it to save electricity. It seems more fitting to me to use the technology in this way, rather than for it to simply become a boat anchor, though it would certainly work well in that application, as the steel case is quite heavy!

Experiences with Cisco VIRL Part 2: INE’s CCIE RSv5 Topology on VIRL

This blog entry was also featured on


VIRL topology + INE RSv5 ATC configs

After getting VIRL set up and tweaked to my particular environment, my next step is to set up INE’s CCIE RSv5 topology, as this is what I will be using VIRL for the most, initially.

I was satisfied with using IOL, but I decided to give VIRL a try because it not only includes the latest versions of IOS, but also many other features that IOL by itself isn’t going to give you. For example, VIRL includes visualization and automatic configuration options, as well as other node types like NX-OSv. I was particularly interested in NX-OSv since I have also been branching out into datacenter technologies lately, and my company will be migrating a portion of our network to the Nexus platform next year. At this point in time, NX-OSv is still quite limited, and doesn’t include many of the fancier features of the Nexus platform such as vPC, but it is still a good starting point to familiarize yourself with the NX-OS environment and how its basic operation compares to traditional Cisco IOS. Likewise, I intend to study service provider technologies, and it is nice to have XRv.

I configured the INE ATC topology of 10 IOSv routers connected to a single unmanaged switch node. I then added four IOSv-L2 nodes, with SW1 connecting to the unmanaged switch node, and the remaining three L2 nodes interconnected to each other according to the INE diagram. The interface numbering scheme had to change, though: Fa0/23–24 became Gi1/0–1, Fa0/19–20 became Gi2/0–1, and Fa0/21–22 became Gi3/0–1.

I built this topology and used it as the baseline as I was testing and tweaking the VIRL VM, as described in Part 1. I was familiar with how the topology behaved in IOL, as well as with using CSR1000Vs and actual Catalyst 3560s, and that was my initial comparison. After getting things to an acceptable performance level (e.g. ready to be used for studying), I realized I needed a way to get the INE initial configurations into the routers, and I would prefer to not have to copy and paste the configs for each device for each lab every time I wanted to reload or change labs.

One of the issues I experienced with VIRL is that nothing is saved when nodes are powered off. If you stop the simulation completely, the next time you start it, everything is rebuilt from scratch. If you stop the node itself, and then restart it, all configurations and files are lost. There is a snapshot system built in to the web interface, but it is not very intuitive at this point in time. Likewise, you have the option of extracting the current running configurations when the nodes are stopped, but this does not include anything saved on the virtual flash disks. Some people prefer having a separate VIRL topology file for each separate configuration, but I find it to be more practical (and faster) to use the configure replace option within the existing topology to load the configurations.

Luckily, the filesystem on the VIRL host VM is not going to change between simulations, and all of the nodes have a built-in method of communicating with the host. This makes it an ideal place to store the configuration files. I went through and modified the initial configurations to match the connections in my VIRL topology. You can download the VIRL topology and matching INE configurations I assembled here. For the routers, this meant replacing every instance of GigabitEthernet1 with GigabitEthernet0/1. The switch configs were a little more involved and required manual editing, but there are not nearly as many switch configurations as there are router configurations. After getting the configuration files in order, I used SCP to copy the tar files to the VIRL VM using its external-facing (LAN) IP address. I placed the files into /home/virl/.
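That bulk router edit is easy to script. Here is a sketch in Python of the find-and-replace described above (not a tool I actually used; any editor or sed would do the same). The word boundary keeps the pattern from mangling longer names, while subinterfaces still get rewritten:

```python
import re

def rename_interfaces(cfg: str) -> str:
    """Rewrite CSR1000v-style GigabitEthernet1 names to IOSv's GigabitEthernet0/1.

    \\b stops GigabitEthernet1 from matching inside e.g. GigabitEthernet11,
    but subinterfaces such as GigabitEthernet1.100 are still rewritten,
    since "." is not a word character.
    """
    return re.sub(r"\bGigabitEthernet1\b", "GigabitEthernet0/1", cfg)

print(rename_interfaces("interface GigabitEthernet1.100"))
# interface GigabitEthernet0/1.100
```

Run over each router’s config file, this takes care of the entire router side of the conversion, leaving only the switch configs to edit by hand.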

Originally, I added an L2-External-Flat node to match every router and switch in the topology so that each node could communicate with the VIRL host VM. However, someone pointed out to me that there was a much easier way to do this: click the background of the topology (in design mode), select the “Properties” pane, then change the “Management Network” setting to “Shared flat network” under the “Topology” leaf. This will set the GigabitEthernet0/0 interfaces to receive an IP address via DHCP in the range, by default. This setting only applied to the router nodes when I tried it, so I still had to manually edit the configurations of the IOSv-L2 switch nodes.

For the four IOSv-L2 switch node instances, I used this configuration:

interface GigabitEthernet0/0
 no switchport
 ip address dhcp
 negotiation auto

It is very important to note that when you convert the port to a routed port on IOSv-L2, you need to remove the media-type rj45 command. This does not need to be done on the IOSv routers, though. The IOSv nodes were configured as:

interface GigabitEthernet0/0
 ip address dhcp
 duplex auto
 speed auto
 media-type rj45

My original intention was to modify the initial startup configurations of the nodes to automatically copy the files to their virtual flash drives upon each boot via TFTP, but the issue I ran into was that the interfaces remain shut down until the configuration is completed. So even though I was able to put the necessary commands into the config (prefixed with “do”), they wouldn’t work because the interfaces were still shut down at that point. However, placing other EXEC-mode commands at the end of the configurations (before the end line), such as do term len 0, may save you some extra steps when you start labbing.

Originally, I was planning to just use SCP to copy the files from the VIRL VM host to the nodes, but there is no way to specify the password within the command – the password prompt is always separate, so it is unusable as part of the configuration.

This led me to configure the VIRL VM as a TFTP server to access the configuration files. I modified some of the information as detailed on this site and performed these steps on the VIRL VM:

sudo su
apt-get update && apt-get install -y tftpd-hpa
vi /etc/default/tftpd-hpa

Modify the file as follows:

TFTP_OPTIONS="--secure --create"
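Only the changed line is shown above. For context, the complete file on a stock Ubuntu install looks roughly like this (the directory is my assumption, based on the tar files living in /home/virl; adjust the listen address if your defaults differ):

```
# /etc/default/tftpd-hpa
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/home/virl"
TFTP_ADDRESS=""
TFTP_OPTIONS="--secure --create"
```

With --secure, paths in TFTP requests are interpreted relative to TFTP_DIRECTORY, which is why the copy commands later on reference the tar files by bare filename.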

And finally, restart the TFTP service to make the changes take effect:

service tftpd-hpa restart

However, I set up the TFTP server before discovering that placing the copy commands in the startup config was useless. So, setting up the VIRL VM as a TFTP server is an optional step; I just decided to stick with it because I’m used to using TFTP.

At this point, the configuration files are in place on the VIRL host VM, and after booting the nodes in the topology, there is connectivity between the nodes and VIRL to copy the files.

If you perform a dir flash0: from any of the nodes, you will see quite a bit of free space on that filesystem. However, when I attempted to copy files to it, I found that it does not actually let you use that space; it would not let me copy all of the configuration files. Thankfully, the flash3: filesystem does.

Assuming you placed the tar files directly in /home/virl on the VM, the following commands will copy the configuration files to your node:

Using SCP:

archive tar /xtract scp://virl@ flash3:

The default password is all uppercase: VIRL

Using TFTP:

archive tar /xtract tftp:// flash3:

Replace “R1” with the actual device you’re copying the files to. With the default settings, the VIRL host VM will always be

After all the files are copied and extracted to your devices, you can use the Terminal Multiplexer inside the VM Maestro interface to issue a command such as this to all of the devices simultaneously:

configure replace flash3:basic.eigrp.routing.cfg force

So far, I have not had great luck with the IOSv-L2 instances. They were released as part of VIRL not too long ago, and have been improving with time. However, for studying for the CCIE R&S at this point in time, I will probably stick with the four 3560s in my home lab and bridge them to the IOSv router nodes.

VIRL is a pretty complex software package with lots of individual components. I’ve only had the software for a few days, so I haven’t had time yet to do a really deep dive, and there are probably even better ways to do some of the things I’ve described here. I wish it could be as fast as IOL, but by comparison, that really is the only major disadvantage of using VIRL instead of IOL. There are so many other features that do make the software worth it, though, in my opinion.

In years past, CCIE R&S candidates were known to spend thousands of dollars on equipment for a home lab. We are lucky enough today that computing power has caught up to the point where that is no longer the case. If you’re in the same boat that I am currently in, where your employer is not paying for training, then VIRL is a pretty good investment in your career. But of course, like anything else, you’ll only get out of it what you put into it. It’s not some kind of magic pill that will instantly make you a Cisco god, but it definitely has awesome potential for the realm of studying and quick proof-of-concept testing.

Experiences with Cisco VIRL Part 1: Comparing and Tweaking VIRL

This blog entry was also featured on


Since it has been out for more than a year, and has been developed and improved tremendously during that time, I decided to finally take the plunge and buy a year’s subscription to the Cisco VIRL software. Until now, I have been using any combination of real hardware, CSR1000Vs, and IOL instances for studying and proof of concept testing.

My first impression of VIRL is that it is a BEAST of a VM with regards to CPU and RAM consumption. I installed it on my 16GB MacBook Pro first, and allocated 8GB to it. However, its use was very limited as I was unable to load more than a few nodes. I then moved it to my ESXi server, which is definitely more appropriate for this software in its current state.

I knew that the CSR1000Vs were fairly RAM hungry, but at the same time they are meant to be production routers, so that’s definitely a fair tradeoff for good performance. The IOSv nodes, while they do take up substantially less RAM, are still surprisingly resource intensive, especially with regards to CPU usage. I thought the IOSv nodes were going to be very similar to IOL nodes with regards to resource usage, but unfortunately, that is not yet the case.

I can run several tens of instances of IOL nodes on my MacBook Pro, and have all of them up and running in less than a minute, all in a VM with only 4GB of RAM. That is certainly not the case with IOSv. Even after getting the VIRL VM on ESXi tweaked, it still takes about two minutes for the IOSv instances to come up. Reloading (or doing a configure replace) on IOL takes seconds, whereas IOSv still takes about a minute or more. I know that in the grand scheme of things, a couple of minutes isn’t a big deal, especially if you compare it to reloading an actual physical router or switch, but it was still very surprising to me to see just how much of a performance and resource usage gap there is between IOL and IOSv.

Using all default settings, my experience of running VIRL on ESXi (after going through the lengthy install process) was better than on the MBP, but still not as good as I thought it should have been. The ESXi server I installed VIRL on has two Xeon E5520 CPUs, which are Nehalem chips that are each quad core with eight threads. The system also has 48GB of RAM. I have a few other VMs running that collectively use very little CPU during normal usage, and about 24 GB of RAM, leaving 24 GB for VIRL. I allocated 20GB to VIRL, and placed the VM on an SSD.

The largest share of CPU usage comes from booting the IOSv instances (and maybe the other node types as well). The issue is that upon every boot, a crypto process is run and the IOS image is verified. This pegs the CPU at 100% until the process completes. This is what contributes the most to the amount of time the IOSv node takes to finish booting, I believe. This may be improved quite a bit in newer generation CPUs.

When I first started, I assigned four cores to the VIRL VM. The IOSv instances would take 5-10 minutes to boot. Performing a configure replace took a minimum of five minutes. That was definitely unacceptable, especially when compared to the mere seconds of time it takes for IOL to do the same thing. I performed a few web searches and found some different things to try.

The first thing I did was increase the core count to eight. Since my server only has eight actual cores, I was a little hesitant to do this because of the other VMs I am running, but here is a case where I think HyperThreading may make a difference, since ESXi sees 16 logical cores. After setting the VM to eight cores, I noticed quite a big difference, and my other VMs did not appear to suffer from it. I then read another tweak about assigning proper affinity to the VM. Originally, the VM was presented with eight single-core CPUs. I then tried allocating it as a single eight-core CPU. The performance increased a little bit. I then allocated it properly as two quad-core CPUs (matching reality), and this was where I saw the biggest performance increase with regards to both boot time and overall responsiveness.

My ESXi server has eight cores running at 2.27 GHz each, and VMware sees an aggregate of 18.13 GHz. So, another tweak I performed was to set the VM CPU limit to 16 GHz, so that it could no longer take over the entire server. I also configured the memory so that it could not overcommit. It will not use more than the 20GB I have allocated to it. In the near future, I intend to upgrade my server from 48GB to 96GB, so that I can allocate 64GB to VIRL (it is going to be necessary when I start studying service provider topologies using XRv).
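For reference, the equivalent .vmx entries for those tweaks would look roughly like this. This is a sketch: the key names come from VMware’s resource-management settings, and the values mirror what I described above, so double-check them against your own environment before applying:

```
numvcpus = "8"                  # 8 vCPUs total...
cpuid.coresPerSocket = "4"      # ...presented as two quad-core sockets
sched.cpu.max = "16000"         # CPU limit in MHz (the 16 GHz cap)
memsize = "20480"               # 20 GB of guest RAM
sched.mem.min = "20480"         # reserve all of it -- no overcommit
```

Most of these can also be set through the vSphere client UI; editing the .vmx directly is just the compact way to show them.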

I should clarify: it still doesn’t run as well as I think it should, but it is definitely better after tweaking these settings. The Intel Xeon E5520 CPUs in my server were released in the first quarter of 2009. That was seven years ago, as of this writing. A LOT of improvements have been baked into Xeon CPUs since that time, so I have no doubt that much of the slowness I experienced would be alleviated with newer-generation CPUs.

I read a comment that said passing the CCIE lab was easier than getting VIRL set up on ESXi. I assure you, that is not the case. The VIRL team has great documentation on the initial ESXi setup, and with regards to that, it worked as it should have without anything extra from their instructions. However, as this post demonstrates, extra tweaks are needed to tune VIRL to your system. It is not a point-and-click install, but you don’t need to study for hundreds of hours to pass the installation, either.

VIRL is quite complex and has a lot of different components. It is expected that complex software needs to be tuned to your environment, as there is no way for them to plan in advance a turnkey solution for all environments. Reading over past comments from others, VIRL has improved quite dramatically in the past year, and I expect it will continue to do so, which will most likely include both increased performance and ease of deployment.

Part 2 covers setting up the INE CCIE RSv5 topology.

Hey, Wait…I Thought You Started Blogging in 2012?

It’s true, I did start this blog in October 2012. In June 2018, I made the decision to prune all of my entries before December 2015. I spent a couple of hours reading over the majority of these entries and realized they are no longer relevant to my life and current career trajectory.

When I started this blog, I was just entering into the vast world of network engineering. I was not yet working in an environment that could take advantage of the new skills I was developing. That wouldn’t come until about three years later. Originally, my intention was to have a personal record of my career development and progression. My blog still serves this purpose for me in some ways.

A minor intention in blogging was to attempt to give myself a much-needed boost in self-esteem, especially since I had no peers to communicate these things with for so many years. I felt that by writing content for a community that I was just getting to know, it would serve as a form of self-validation, since I was unable to obtain it from my work at the time.

This is the part that has dramatically changed, especially since I passed the CCIE R&S written exam and made an attempt at the lab exam. Whereas before I would have tried to project personal confidence, but not necessarily feel it within myself, I now feel true confidence in the things I say and do as a professional based both on the skills I’ve developed as well as the professional experience I have established.

I’ve come to this realization within the past month or so, and it occurred to me that this is the point I had hoped to reach someday when I started blogging nearly six years ago. My oldest posts are filled with artificial confidence. My recent posts have a much different feel, and this is what I wish to project publicly moving forward. Out with the old, in with the new, such is the world of technology. 🙂