Attempting to Avoid Obsolescence

This post is pretty long and meandering. The crux of it is that I wrote and used my first Python script on a production network yesterday, and it made me pretty damn happy to see the results.

Now for the Director’s Cut:

I’m sure the title represents something I will be forced to revisit many times during my career, and we all know that everything old is new again (Rule 11, naturally). As I get deeper into my career and expand both my experience and base of knowledge, I see Rule 11 all the time. When you’re first starting out, you may be aware of the concept, but it definitely takes time and experience to truly appreciate it.

A couple months ago, I started looking more into scripting and network automation. I mentioned Rule 11 because I am well aware of the fact that, despite all the industry buzz during the past couple of years, this is not in any way new. Before Python took over as the scripting language of choice for network engineers, it was Perl.

However, scripting seems to remain a relatively important skill for network engineers dealing with systems of any sort of scale. Many of the end results of scripting can be achieved by purchasing expensive network management software. I have discovered, though, that even with expensive NMS software in place, sometimes you would like to gather very specific information, and either the NMS doesn’t support what you’re looking for, or actually obtaining the desired information from the software is extremely unintuitive or cumbersome.

I enjoy learning the “classical” network engineering skills, such as the various routing and bridging protocols, and architecture and design. That is one of my motivations for continuing to study for the CCIE. But, as I’ve written about in the past, it’s taking me a long time to get the CCIE because it is not my sole focus. I don’t want to be a one-sided engineer who can’t fathom thinking outside the world of Cisco, and I don’t want to be the kind of person who designs networks a certain way because “that’s how it was on the Cisco certification exam.”

On the other side, I’m not necessarily interested in being the “full stack” engineer, either. There are many sysadmin duties and responsibilities that I am glad I do not have (like worrying about people’s files and email). Yet, I know enough about Windows and Linux to do the things that I need to do. I know the basics of wireless networking and IP telephony. I have a decent level of VMware vCenter knowledge and experience (mostly through breaking and fixing things in my home lab). I also have some knowledge of SANs and storage systems, though I will also admit that I find the networking aspect of SANs very appealing and may explore that at a deeper level later on.

Still, there’s enough industry talk about scripting and network automation that I decided it was time to investigate a little on my own. I will very readily admit that the idea of working with APIs, where you can send and receive actual, exact information as opposed to what you get with screen scraping, is very appealing. It’s going to take a while before I reach that point, especially considering the vast majority of equipment I currently work with is 10-to-15-year-old classic Cisco IOS gear. That means interaction via screen scraping.

I started my current job a little more than a year ago. While my last job gave me the absolute tiniest taste of enterprise-level network engineering, my current job has been full-on, giving me so much of the experience I have desired for the past several years. While I may be working with primarily older technology (there’s some brand new stuff sprinkled in here and there), it is at a scale large enough that certain automated tasks begin to make sense.

When I started, we had no configuration backup process in place for the routers and switches (that I was aware of). I discovered RANCID, and built my own server from scratch. I had never worked with this software prior, and while we are using SolarWinds Orion for some things, we are missing the NCM piece, which I discovered was ungodly expensive.

Figuring out how to set up and use RANCID has been one of the best things I’ve done so far for myself. I have used it so many times for various things, and it has been a real time saver on occasion. Just the other day, I had to send out a replacement router, only I didn’t find out about it until 20 minutes before UPS was scheduled to do their normal pickup. Because the configuration was already backed up in RANCID, I was able to quickly grab a router, erase its config, load the backup config, and get it packaged and shipped out before the deadline.

I experimented with sending out certain commands to all of the routers via RANCID, and that was pretty neat, but felt kind of awkward for some reason. I knew that I would be able to do more if I learned how to script with something like Python. A couple months ago I started making my way through Learn Python the Hard Way, and I’ve also made it halfway through Kirk Byers’ excellent Python For Network Engineers course.

I found learning some of the stuff pretty difficult at first. Part of it was me doing my best to let go of my attitude regarding programming. I got my Bachelor’s degree in Information Systems, not Computer Science, because I was more interested in networking, and not at all interested in programming. But as time goes on, I feel like I made the wrong choice. Not because I want to be a programmer now, but because having that background could have been more beneficial to me for the future. However, I am sure that writing scripts is not at all like being a full-on software developer, so I should probably not make that comparison.

Right now I am still at the extreme beginner stage, and I know that I will need to go over the learning material several times if I wish to reach the point of being able to “think in code”. Still, I knew that I would need to start somewhere if I wanted what I am learning to stick. Sometimes, the hardest part of learning is seeing where and how what you’re learning can be applied. Coding is all about breaking problems down into the smallest pieces possible, and then recombining them into something meaningful.

This is something that can be very difficult to see at first. “I want to learn Python, but what do I want to do with it?” Beginning with the knowledge that you can do practically anything with it, as long as you know how, does not help at all. I needed to start smaller. I needed a single simple task that I could start with, and learn how to build upon it as needed. During this past week, I had been thinking about this more.

I wanted to start with a simple script that could take a list of hostnames or IP addresses of Cisco routers, send a command to them, and dump the results to a text file. On the surface, that sounds easy enough. Yesterday, my boss wanted me to gather information about how many of our branch offices had more than six Cisco phones, but did not have Cisco switches installed. I saw this as the perfect opportunity to finish what I had started and have an actual use case for the script.

Ultimately, the script would need to log into each specified Cisco router, issue a command (“sh cdp n | inc SEP” in this case), and dump the results to a text file. I was able to cobble together a script based on what I had learned from the first several chapters of Learn Python the Hard Way, the first half of Kirk Byers’ course, and a couple of quick web searches on how to write code for a couple of specific tasks. I created a GitHub account a few months ago, and as embarrassing as my first script might be, I decided to go ahead and post it to my account anyway. If nothing else, I figured it was something else that I could say I have done.
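The heart of that first script can be sketched roughly like this. This is a simplified reconstruction, not the actual posted script: it uses the Netmiko library from Kirk Byers’ course, and the IPs, credentials, and filename are placeholders. The `connect` parameter is injectable only so the loop’s logic can be exercised without live routers.

```python
# Simplified sketch: log into each router, run one command, collect output.
try:
    from netmiko import ConnectHandler  # Kirk Byers' multivendor SSH library
except ImportError:                     # keep the sketch importable without it
    ConnectHandler = None


def collect_command_output(hosts, command, username, password,
                           connect=ConnectHandler):
    """Log into each host, run one command, return {host: output}."""
    results = {}
    for host in hosts:
        conn = connect(device_type="cisco_ios", host=host,
                       username=username, password=password)
        results[host] = conn.send_command(command)
        conn.disconnect()
    return results


# On a real network it would be run roughly like this (requires Netmiko
# and reachable devices, so shown here only as a comment):
#   results = collect_command_output(
#       ["10.1.1.1", "10.2.2.1"],               # placeholder router IPs
#       "show cdp neighbors | include SEP",
#       "admin", "secret")                       # placeholder credentials
#   with open("cdp_results.txt", "w") as f:
#       for host, output in results.items():
#           f.write(f"===== {host} =====\n{output}\n")
```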

The first issue I ran into was scope creep. Should the script do this? Should it do that? It would be nice if it did this other thing. I saw how this could very quickly get out of control. I realized that, for now, I am still learning, and I just need to accomplish this simple single task at the moment. Keep it simple.

The second issue I ran into was errors I didn’t anticipate and didn’t know how the script would handle. The first I experienced was a host being unreachable. I knew from the start that this would be an issue, but I wasn’t sure how to deal with it. Through a web search, I found out how to ping a host and return True or False, and I used this as the primary error check.
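That reachability check looks roughly like this (a sketch shelling out to the system `ping`; the flags shown are the Linux/macOS style, and Windows would need `-n`/`-w` instead):

```python
import subprocess


def host_is_reachable(host, count=1, timeout=2):
    """Return True if `host` answers a ping, False otherwise.

    Flags are Linux/macOS style (-c count, -W timeout); Windows uses
    -n and -w. A missing ping binary is treated as unreachable.
    """
    try:
        result = subprocess.run(
            ["ping", "-c", str(count), "-W", str(timeout), host],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return result.returncode == 0
    except OSError:
        return False
```

The script can then simply skip any host where this returns False instead of attempting to connect.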

Other errors I ran into, but have not yet resolved, include bad authentication and command timeout. Sometimes, the two are related. Upon reaching either of these failures, my current script just dies and does not handle the exception. This will probably be the first thing I try to improve upon.
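When I get to it, the fix will probably be a per-device try/except along these lines. This is a sketch of the pattern, not finished code: Netmiko raises specific authentication and timeout exceptions (their exact names and module path vary by version), so I catch broadly here and record the failure instead of dying.

```python
def run_on_device(connect, device, command, errors):
    """Run one command on one device; log failures instead of crashing.

    `connect` is a Netmiko-style factory (e.g. ConnectHandler). Failures
    such as bad authentication or a command timeout get appended to
    `errors` so the loop over the remaining routers can continue.
    """
    try:
        conn = connect(**device)
        output = conn.send_command(command)
        conn.disconnect()
        return output
    except Exception as exc:  # e.g. Netmiko auth/timeout exceptions
        errors.append((device.get("host"), type(exc).__name__))
        return None
```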

Going back to the idea of scope creep, as I was putting the script together, I was thinking of all kinds of different ways that the script could be added to and enhanced…if only I knew how to do it. For example, in my current use case, I only wanted to know which locations have more than six Cisco phones, but do not have a Cisco switch. If I knew Python a little better, I could write a script to scour the network and present back to me only this exact information, instead of the extraneous information provided by screen scraping. However, this is something that will develop over time, because learning this stuff is just one of many skills I wish to gain experience in, and it all takes time.

In the end, the script did what I needed it to do, and gathered the information that I required. I was able to get the information I needed by letting the script run over a lunch break, whereas my alternative (getting information from our IPAM system) would have taken much longer, due to the specific information I needed. I was very excited by this, and I can see this turning into something important over time.

There’s a lot out there to learn, and it is very difficult sometimes to remain focused on any single thing. Many times, what you learn is dictated by the immediate business problem you need to solve. Rarely is what you need to learn singularly focused, and learning it all takes time and practice. It can be a delicate balance not to be pulled in too many directions at once, both in real life and in “study life”. I’ll admit that, in itself, is another skill I am still trying to learn.

A Year and a Half with the Packet Pushers

I first discovered the Packet Pushers Podcast about a year and a half ago, and as of this past week, I have finally caught up on all of the episodes. I wanted to write about how grateful I am to Greg, Ethan, Drew, and all the other participants (both hosts and guests) for conceiving of the show six years ago and sticking with it. This show and the community around it have made a very large impact in my life.

At first, I listened to about 10 shows on the topics that interested me the most at the time. Then, I discovered who my favorites were among frequent guests, and I listened to several more episodes containing those particular guests. I realized how much amazing information I was getting with regards to industry experiences and anecdotes for many technologies that I had not yet worked with directly, but wanted to, so I started listening to every episode from the beginning.

I wasn’t very happy with where I was professionally when I started listening. In every job I had had through that point in my life, I did not have any peers or friends who were interested in network engineering. In addition, I was living in a comparatively rural area where there wasn’t a lot of demand for the type of work that I had been training myself for. Packet Pushers opened up a larger world for me where I could listen to people doing the things I wanted to eventually do. Hearing people’s experiences about working with various networking technologies, and comparing that to what I had learned from studying these technologies for several years, helped to give me the confidence I needed to seek new employment that would take me in the direction of what I wanted to do in the realm of network engineering.

I interviewed and got hired by the company I presently work for. The only problem was that I lived two hours away, but I told them I would be relocating to the area ASAP. It took about two months to save up to move (it costs a lot more to live in the city), and during that time I spent four hours every weekday driving back and forth. Those drives are how I was able to listen to a very large portion of the Packet Pushers back catalog.

Having listened to nearly every episode (and skipping only those very few that held less interest for me), it was very interesting to hear the podcast change over time. I believe the content has always been top-notch from the very beginning, but it was kind of funny to hear the various audio issues present during the first year or so. Eventually, the audio reached an extremely professional-level quality that sounds absolutely superb.

I thought it was excellent and very smart to branch out from the main show and start developing others under the Packet Pushers umbrella. What a great idea to start the Priority Queue as an avenue to discuss topics containing more specific, detailed, and sometimes niche content that are wonderful to listen to, but might not necessarily appeal to a more general audience. Healthy Paranoia was fun to listen to when it was in production. “The Coffee Break” developed into the Network Break, and eventually took over as my favorite of all the podcast series; it is definitely the show I look forward to the most each week.

Or maybe it’s Datanauts? What a superb series this is! While I am a network engineer at heart, I do have experience with Windows, Linux, VMware vSphere, and storage. In my current role, though I am a network engineer for my company, I feel like I often act as the bridge between all these various silos as people come to me for questions. Listening to this show and its attitude toward silo-busting has been wonderful, and has given me confidence to act as the occasional bridge for the various silos in the workplace.

Eventually I would like to take my career in the direction of network design and architecture. This requires knowledge of and interaction with the business side of things. For this, I have very much appreciated listening to The Next Level. I love how they discuss various topics that relate to IT, but are not necessarily directly about technology itself.

I also very much enjoy the topics discussed on the Infotrek series. This is the newest series as of this writing, and I feel like the topics they discuss very much round out the overall topic scope of the entire Packet Pushers umbrella of shows.

I think it is great that Greg and Ethan were able to take something small and stick with it until it grew to the size of being able to take over as full time employment for them. Episode 300 of the main show is only a couple weeks away, and I congratulate them on their success. Their dedication to the Packet Pushers is demonstrated in everything they do with regards to the community. Adding Drew to the lineup was also a great move because he does an excellent job of managing the site and interacting with the community. Plus, he is excellent at keeping the Network Break on-topic and moving along during the show.

And speaking of the community, how extremely generous of them to open up both their site and podcast to community content! This is another way in which Packet Pushers has impacted my life. At the end of last year, I wrote a post on my experiences with Cisco VIRL. Greg saw this post and told me it would be great to have on the Packet Pushers website. I felt extremely honored by this. I have since posted one other article, and will probably post again sometime in the future. Due to the popularity of the Packet Pushers and the exposure it gave me, someone from a major networking website discovered me, took interest in my writings, and offered to have me write a paid article for their site. This was all due to the enormous generosity of Greg, Ethan, and Drew encouraging openness and participation in the Packet Pushers community.

After listening for a year and a half straight, it feels kind of weird to be caught up finally. Now I’ll get to experience what many others have had for a while: waiting each week for the new episodes to be released. Until now, I’ve been able to see what was coming up next. Now, like everyone else, I will have to be patient and see what arrives, which has its own level of excitement.

I give my most sincere thanks to Greg, Ethan, Drew, and all other hosts, guests, and people who have otherwise participated in generating content for the Packet Pushers community. You truly do have an impact in people’s lives, and I am grateful for your part in helping to make me a better network engineer.

One Year In…And In…

One year ago today, I started working for my current company. I was thinking about writing my usual long introspective post, but upon reviewing my posts over the last year, I believe I’ve already covered my feelings very well, and they haven’t changed much.

I am still very happy, and I continue to gain excellent experience that is helping to propel me forward in my career. Over the course of the last year, I have become much more comfortable with myself, and with others.

I can feel the difference in myself when I talk to people now, and it is quite an amazing feeling. The validation of myself and my professional skills has been life-changing.

I’m looking forward to what the next year may bring in my professional life.

Anki, My New Love

Until now, I was never one to use flashcards. I could not see their value, and I was too lazy to actually write things down on a paper flashcard (and my handwriting is horrible).

I recently discovered a program called Anki. On the surface, it is just a flash card program, but underneath, it can be as simple or as complex as you desire. The first couple of days that I used Anki, I was still in this mindset that flashcards are not for me, and they hold no value with how I am used to learning.

Wrong!!

What makes Anki so great (in addition to being free on every platform except Apple iOS) is the way it works. Active recall and spaced repetition are what make it such a powerful program. As mentioned in the link, active recall is the process of actively answering a posed question, as opposed to just passively studying (such as reading or watching training videos). Spaced repetition is the practice of spreading out reviews of material in gradually longer increments, the idea being that you’ll remember things for a longer period of time.

Anki is based on another program called SuperMemo. I first heard of SuperMemo a few years ago after reading this blog post by Petr Lapukhov (one of many people I consider a rock star in the world of computer networking). A lot of research went into the development of SuperMemo (and consequently, Anki), and Anki attempts to solve some of the perceived shortcomings of SuperMemo.

After using it for nearly two weeks, I am already experiencing the benefit of learning using this method. I am retaining details from the flashcards I created that I know I would have forgotten (because I’ve learned and forgotten these things in the past, probably more than once!).

The flashcards are arranged into decks, and decks can contain other decks. The cards themselves can contain pretty much any content you can think of, including audio and video. Cards can also contain tags, which I’ve found to be extremely useful.

For example, even though I am studying the overall topic of “CCIE Routing & Switching”, I have multiple sub-decks, with each deck representing a source of information (such as a particular book, or a particular video series). Yet I can relate the different decks together with the use of tags. For example, I could study on the EIGRP tag across all the sub-decks.

One of the most useful things I have learned about creating flashcards is to not put too much on a single card. I found it better to break things up as much as possible. This helps with faster recall, and since you’re not actually using paper, it doesn’t matter how many cards you create.

For the first couple of days, I had a few cards that contained too much information, and I kept getting the answers wrong. After I broke each complicated card into multiple simpler cards, I was able to retain the information better with each successive pass.

What led me to create more complicated cards at first was knowing that if I’m studying for the CCIE, for example, it’s an advanced test with expert-level questions. I thought I would be doing a disservice to myself by making the flashcards too easy. Luckily, I quickly realized that this is the wrong approach. The reason for using the flashcards is to retain little pieces of information, whose aggregate can then be applied to something more complex.

When making easier cards, I try to contain only a single piece of information in the answer whenever possible. When it’s not possible, I try to formulate the question so that it indicates the number of components in the answer. I also modified the default flashcard format to display the associated tags I have given the flashcard, which can act as a hint if the question seems too ambiguous.

The style of flashcard will depend on what you’re trying to learn. For example, if you’re learning a foreign language, you may place the foreign word on the front and the native word on the back (or vice versa). For me, taking simple facts and rephrasing them as simple questions proved the most effective. I find the question “What IP protocol does EIGRP use?” more engaging than simply “EIGRP IP Protocol” or something similar. IP protocol 88 is the answer, by the way.

At first, I was worried about the questions being too easy. This is a simple question, and duh, the answer is obvious! But, the answer is always obvious as you are writing the question. A few days or a week later, the answer may not be so obvious. This was what I discovered after using the program for about two weeks. I remember writing the question, and I remember the answer being something very easy…but I couldn’t remember what the answer was.

Enter spaced repetitions.

After you have created the flashcards, reviewing works just like a physical flashcard: you look at the front and recall what is on the back. What makes Anki work so well is that upon revealing the back of the card, you have to decide how difficult or easy recalling the answer was. This is where you need to be truly honest with yourself to get the most out of the software.

Depending on what you click (Again, Hard, Good, Easy), the card will be shown to you again at the appropriate time in the future. For example, if the answer came to you instantly, you would click Easy. If the answer comes to you instantly again when you see the card the next time, clicking Easy again will increase the time Anki waits before showing you the card again. The Again, Hard, Good, Easy values are not static, and depend on multiple factors that change with each repetition of the flashcard.
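As a toy illustration of that mechanic (this is not Anki’s actual scheduler, which tracks more state, just the general SM-2-style idea): each rating either resets the interval or multiplies it, while also nudging a per-card “ease” factor up or down.

```python
def next_interval(days, ease, rating):
    """Toy spaced-repetition step, loosely SM-2 style.

    Not Anki's real algorithm -- just an illustration of how the
    Again/Hard/Good/Easy ratings stretch or reset the review interval.
    Returns (next_interval_in_days, new_ease_factor).
    """
    if rating == "again":
        return 1, max(1.3, ease - 0.2)                       # reset the card
    if rating == "hard":
        return max(1, round(days * 1.2)), max(1.3, ease - 0.15)
    if rating == "good":
        return max(1, round(days * ease)), ease              # normal growth
    if rating == "easy":
        return max(1, round(days * ease * 1.3)), ease + 0.15
    raise ValueError(f"unknown rating: {rating}")
```

So a card answered “Good” at a 4-day interval with the default 2.5 ease would next appear in about 10 days, while a lapse (“Again”) drops it back to tomorrow and makes future intervals grow more slowly.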

Getting into the routine of reviewing the flashcards once every day is important to retaining the knowledge. By default, Anki will introduce 20 new flashcards to you every day per deck. This value (like just about everything else in Anki) can be adjusted. The cards can be sequential (default) or randomized (which is what I set it to). If you make your flashcards simple enough, 20 may be a very good value for you. If you have a deck of 200 cards, it will take 10 days for all of the cards to be revealed to you.

However, in addition to the 20 new cards, each day will contain previous cards depending on how you rated them. If you rated a card as “Hard” yesterday, you’ll probably see it repeated today. This is what I have found to be so useful over the past two weeks.

I may mark several cards as “Again”, which causes a card to be shown again during the same day’s study session. After a couple days of marking a card “Again”, I might mark it “Hard”, and after a few more repetitions the card becomes “Good”, and hopefully eventually “Easy”. I haven’t been using the program long enough yet for some of my cards to make that complete progression, but I can see it getting there, which is exciting! Yet another great thing about Anki is that it keeps statistics with regards to your learning, and you can view your progress on nice pretty graphs.

Because cards that are marked “Easy” get displayed less, you waste less time studying those cards because you’ve retained that information, so you can study other cards that are more important. Before using Anki, that was a very bad habit I found myself falling into frequently: studying things I already knew, because it’s easier.

Anki supports sharing the flashcard decks you create. This may be useful if you want to import somebody else’s work, but personally, I found much more value in creating my own flashcards with my own questions and answers because it forces me to examine the individual piece of information and then figure out how to formulate it into an answerable question (which is not always as easy as you might think it is).

When you’re studying for something complicated, such as a certification, it may contain many details that are important to know, but difficult to retain because you don’t frequently need that information. Going back to the EIGRP example, you need to know what the default K-values are for some Cisco certification exams, but in a production network, it is rare to actually need to know that exact detail, and it is even more rare for those values to be changed. However, through the power of spaced repetition, it is a piece of information that you can hold on to.

And who knows? Outside of a certification exam, maybe one day you’ll run into a situation where that particular bit of information really is helpful, and that is when knowledge and experience will combine to give you the solution you need.

On a personal note, it may sound silly considering I am 36 years old as I write this, but during this past year I really feel like I am finally learning how to learn. I feel like I am discovering things that I should have been taught in high school or college. I would certainly have had an easier time with the more difficult subjects if I knew then what I know now.

Bringing an Old Mac Pro Back to Life with ESXi 6.0

It’s been quite a while since I’ve done a purely technical post.

The original Mac Pro is a 64-bit workstation-class computer that was designed with the unfortunate limitation of a 32-bit EFI. The two models this post discusses are the original 2006 Mac Pro 1,1 and the 2007 Mac Pro 2,1 revision. Both systems are architecturally similar, but the 2006 model features two dual-core CPUs, while the 2007 model has two quad-core CPUs, both based on the server versions of Intel Core 2 chips. I have the 2007 version, which has two Intel Xeon X5365 CPUs for a total of eight cores.

Apple stopped releasing OS X updates for this computer in 2011, with 10.7 Lion being the final supported version. There are workarounds to get newer versions of OS X to run, and a similar concept is being used to make newer versions of ESXi to run. On a side note, getting the newer versions of OS X to run on these old Mac Pros works pretty well, as long as you have the necessary hardware upgrades, which includes a newer video card and potentially newer wi-fi/bluetooth cards.

Like older versions of OS X, older versions of ESXi booted and installed without issue on the old Mac Pros. But at some point, ESXi stopped being supported on these Macs, because newer systems use 64-bit EFI while these older systems are stuck with 32-bit EFI. However, even though it is nearly 10 years old, the 2007 Mac Pro has eight Xeon CPU cores (the two quad-core CPUs combined have roughly the computational power of a single Sandy Bridge-era Core i7), is capable of housing 32 GB of RAM plus four hard drives (six if you don’t care about the drives being seated properly), and has four full-length PCI-e slots and two built-in Gigabit Ethernet ports.

This computer is more than worthy for lab use, and could definitely serve other functions (such as a home media server or NAS). Additionally, when running ESXi, you do not need to have a video card installed, which frees up an extra PCI-e slot.

To get ESXi 6.0 (I used Update 2) to run on the old Mac Pro, you need the 32-bit booter files from an older version of ESXi. The process involves creating an installation of ESXi 6.0 and then replacing the boot files on the new installation with the files included in the link.

To do this, I installed ESXi 6.0 Update 2 into a VM on a newer Mac running VMware Fusion, using the physical disk from the Mac Pro as the VM’s disk. The physical disk may be attached to the newer Mac using any attachment method (USB, etc.); I used a Thunderbolt SATA dock. VMware Fusion does not let you attach a physical disk to a VM from within the GUI, but it can be done.

After creating the VM, attaching the physical disk, and booting from the ESXi ISO image, I installed ESXi, choosing to completely erase and use the entire physical disk. After installation, you may wish to do as I did and boot up the VM before you replace the EFI files, so that you can set up the management network. By setting this up in advance, you can run your Mac Pro headless and manage it from the network.

After you have installed ESXi in the VM onto the physical disk (and optionally set up the management network options), shut down the VM, but leave the physical disk attached. Go to the Terminal, type “diskutil list” without quotes, and look for the partition that says “EFI ESXi”. Make a note of the identifier (it was disk4s1 in my case). Enter “diskutil mount /dev/disk4s1” or whatever yours may be.

Use the files included in the ZIP to replace:

/Volumes/ESXi/EFI/BOOT/BOOTIA32.EFI
/Volumes/ESXi/EFI/BOOT/BOOTx64.EFI
/Volumes/ESXi/EFI/VMware/mboot32.efi
/Volumes/ESXi/EFI/VMware/mboot64.efi

Then unmount the physical disk with “diskutil unmountdisk /dev/disk4” (changing 4 to your actual disk; don’t specify the individual partition). Then connect the disk to your Mac Pro, power it on, and have fun.

By having ESXi installed on a Mac Pro, you are able to install OS X virtual machines without requiring the VMware Unlocker workaround. Additionally, with four PCI-e slots, you could add things like Fibre Channel HBAs, multi-port NICs, USB 3.0 cards, etc.

The downside to using a Mac Pro 1,1 or 2,1 today, though, is its power usage and heat output. This is due to two primary factors: the CPUs and the RAM. Both are considered horribly inefficient and power hungry by today’s standards (but what do you expect from 10-year-old technology?). The two CPUs each have a TDP of 150W. Nearly all of the Intel Xeon CPUs produced today (even the most expensive ones) run much cooler than this. The other culprit is the DDR2 FB-DIMM RAM.

To provide some perspective, I plugged in my handy Kill-a-Watt to see what kind of power was being used. I thought the bulky X1900 XT video card that came with the system would be a large part of the equation, but that turned out to not be true. With the video card, 32 GB of RAM (8x4GB), and a single SSD, the system consumes about 270W idle! Take out the video card, and it idles at 250W. Take out 24 GB of memory (leaving two 4GB sticks installed), the power drops to 170W. So that means the FB-DIMMs alone consume about 100W altogether. I calculated that where I live, it would cost about $1 a day in electricity to keep it running 24/7.
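The “$1 a day” figure checks out as a back-of-the-envelope calculation; note that the electricity rate below is my illustrative assumption of roughly $0.15/kWh, not a quoted rate:

```python
# Sanity check of the "$1 a day" estimate for running the Mac Pro 24/7.
idle_watts = 270                        # measured full-config idle draw
kwh_per_day = idle_watts * 24 / 1000    # watts -> kWh over a day: 6.48 kWh
rate_per_kwh = 0.15                     # assumed electricity rate, $/kWh
daily_cost = kwh_per_day * rate_per_kwh
print(f"~${daily_cost:.2f} per day")    # about $0.97
```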

For perspective, my main server, which houses two quad-core Nehalem Xeons (which are about 7 years old as I write this), 48 GB of RAM (6x8GB DDR3 DIMMs), and 12 hard drives, uses a total idle power of 250W. A typical modern desktop PC probably uses less than 100W.

Another potential disadvantage is that the Mac Pro 1,1 and 2,1 have PCI-e version 1.1 slots, which are limited to 2.5 GT/s (roughly 250 MB/s) per lane. This may or may not be an issue depending on the application, but don’t expect to be running any new 32Gb Fibre Channel cards with it.

Possibly the most serious disadvantage, especially for lab usage, is that the CPUs in these Macs support Intel VT-x but not EPT, which was introduced in Intel’s next microarchitecture, Nehalem. EPT (Extended Page Tables), also known as SLAT (Second Level Address Translation), is what allows for nested hypervisors. This means you can’t run Cisco VIRL on these model Mac Pros.

So for me, reviving the old Mac Pro is good for lab purposes, and I turn it off when I’m not using it to save electricity. It seems more fitting to me to use the technology in this way, rather than for it to simply become a boat anchor, though it would certainly work well in that application, as the steel case is quite heavy!