Geoff and I actually decided to drop Q-Logic (QLGC) from The Avid Hog’s list of candidates a while ago. Q-Logic doesn’t fit the kind of business The Avid Hog is looking for. But I think it may fit the taste of some readers of the blog. That’s why I’m writing this post.
Parent and Child Turned into Competitors
The story started in 1994, when Emulex spun off Q-Logic. Q-Logic was then focused on disk controllers, which help CPUs communicate with disk drives. Emulex focused on its networking business.
Servers in data centers used to have disk drives attached directly to them. But as the number of servers deployed in a data center grew, it became harder to manage isolated storage resources. So a new storage architecture, the Storage Area Network (SAN), was developed.
SAN allows data centers to centralize storage in one place. Servers get data from the storage pool over a network. SAN reduces cost because it cuts the excess capacity each server would otherwise need. It also makes storage easier to manage and expand.
A protocol is required for servers to talk to the storage pool. Fibre Channel (FC) is the most popular protocol for SAN. FC is to SAN what Ethernet is to the Local Area Network (LAN).
The development of SAN meant the convergence of storage and networking. Emulex, with its core in networking, experimented with SAN. Q-Logic, with its focus on disk controllers, made an early investment in SAN. The parent and child turned into competitors. But it was a lucrative rivalry.
High Switching Cost Results in a Wide Moat
Both Emulex and Q-Logic produce FC adapters. FC adapters sit in servers and let servers talk to the SAN. As in any hot industry, there were once dozens of competitors. But the industry eventually settled, and Q-Logic and Emulex together have held over 90% market share for the last decade.
Both companies’ margins are very high. Q-Logic has about a 67% gross margin. Emulex has about a 64% gross margin. That’s incredible! It means Q-Logic can consistently charge giant customers like IBM, HP, and Dell three times what it costs to make an FC adapter. That’s a good indicator of some competitive advantage.
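The jump from "67% gross margin" to "three times cost" follows directly from the definition of gross margin. A quick back-of-the-envelope sketch (using only the margin figures from the post):

```python
# Gross margin is defined as (price - cost) / price,
# so price = cost / (1 - gross_margin).
def price_to_cost_multiple(gross_margin: float) -> float:
    """Selling price as a multiple of cost, implied by a gross margin."""
    return 1.0 / (1.0 - gross_margin)

print(round(price_to_cost_multiple(0.67), 1))  # Q-Logic: 3.0x cost
print(round(price_to_cost_multiple(0.64), 1))  # Emulex: 2.8x cost
```

So a 67% gross margin implies a price of roughly 3x cost of goods sold, which is where the "three times" claim comes from.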
It turns out that the competitive advantage comes from high switching costs. An FC adapter accounts for less than 7% of total SAN cost. Yet it’s critical to the SAN’s performance. And the people who manage the storage are conservative. Their job is to provide reliable and consistent availability of information. They just want to maintain the robustness of the SAN. My discussion with an industry expert illustrates this switching cost:
Expert: For storage environments, companies need to fully test interoperability and performance of a driver with the applications. The testing and validation of a new driver can take over 6 months, so there is a disincentive to change adapters. The operational expenses of managing the environment and maintaining the robustness of the stack typically outweighs any capital costs.
Quan: Are these tests performed during OEMs qualification or performed with each end-customer (enterprise customers)?
Expert: The vendors/OEMs do product interoperability (adapter/server/switch/storage), but customers need to bake their full software stack (applications). This is a per-customer basis.
Quan: When customers upgrade to 16Gbps FC, do they need to install new drivers and run a new test that would take 6 months? Or do Q-Logic and Emulex produce new HBAs or CNAs that work with the old drivers that customers already have?
Expert: While the 16Gb FC HBAs do have the same drivers, large enterprises typically will still do the testing. There is no "standardized" application stack, so while the OEMs do plenty of testing (enough for some customers), the big customers that buy the bulk of the adapters will still need to verify. From the time that a new generation of adapter is released, it usually takes at least 3 years before 1/3 of sales are the new generation. Baking out the end-to-end takes time (16Gb FC still isn't on most storage arrays today), negotiating new contracts, running through old inventory, etc. The HBA business is an arcane one...
Quan: When large enterprises upgrade their SAN and do the testing, does the testing interrupt the data center's operations? And would they upgrade all components of the SAN (HBAs and switches) at the same time or would they upgrade switches before HBAs?
Expert: Switch upgrades (transparent to the application, no driver) are much easier than HBA. Typically new HBAs are put in new servers, so there is the waiting for a refresh cycle to do it. The whole environment is usually tested in a sandbox.
Once a customer successfully tests an FC adapter, there’s no incentive to switch. Q-Logic wisely makes new adapters compatible with existing drivers, so customers naturally stick with Q-Logic through each upgrade cycle. Q-Logic also has a reputation for being first to market. It was always the first to introduce a new adapter in each upgrade cycle. That helped it gradually increase market share.
Q-Logic Repurchased Half of Its Shares
The FC adapter business became Q-Logic’s cash cow. Since 2001, Q-Logic has generated $1.8 billion of free cash flow, averaging $139 million a year.
Unlike the management of most tech companies, Q-Logic’s management has focused on protecting the core. They made some investments adjacent to the core but couldn’t find a new cash cow. Most of the free cash flow was used to repurchase shares. Since 2001, they have spent $1.9 billion on share repurchases. The share count declined from 185 million in 2001 to 89 million today.
Mr. Market Is Extremely Pessimistic about Q-Logic
Q-Logic currently has $433 million of cash. The market cap is $1,080 million. So they could buy back 40% of the company immediately. The reason they haven’t is that most of the cash is held offshore. But they could well borrow money in the US to fund repurchases, as many tech companies do. Given the potential FCF from the core business, it’s safe to expect that if the share price doesn’t go up, Q-Logic can eventually repurchase half of the company.
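The buyback math above is easy to verify (figures taken from the post; this is just a sanity check, not new data):

```python
cash = 433          # cash on hand, $ millions
market_cap = 1080   # market capitalization, $ millions

# Fraction of the company the cash pile could buy back at the current price.
buyback_fraction = cash / market_cap
print(f"{buyback_fraction:.0%}")  # prints "40%"
```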
Q-Logic made only $83 million of FCF last year. But it’s at the low point of a cycle. 40% of its business comes from the government and financial sectors, where capital spending was weak last year and will stay weak in the near future. Q-Logic is also spending a lot of money on new products, so R&D is higher than normal. But that’s for growth. Q-Logic can always cut back R&D and improve FCF.
So FCF may one day return to about $139 million a year. On a share count halved to roughly 45 million, that translates into about $3.12 of FCF per share. Even at 10x FCF, the stock price in 5 years would be $31.20. That would be a 21% CAGR from today’s $12 share price.
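The per-share and CAGR figures can be reproduced from the post's own numbers. A minimal sketch, assuming the share count is halved by buybacks as the previous section suggests:

```python
fcf = 139                 # normalized annual FCF, $ millions
shares = 89 / 2           # ~44.5 million shares after repurchasing half

fcf_per_share = round(fcf / shares, 2)     # ~$3.12
price_in_5y = 10 * fcf_per_share           # at a 10x FCF multiple
cagr = (price_in_5y / 12) ** (1 / 5) - 1   # from today's $12 share price

print(fcf_per_share, price_in_5y, f"{cagr:.0%}")  # 3.12 31.2 21%
```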
It seems that Mr. Market isn’t paying attention to Q-Logic’s cash balance. Q-Logic’s EV is $647 million, close to Emulex’s EV of $586 million. Yet Q-Logic has more market share than Emulex and is much more profitable.
The Core Business Is Declining
So, why did we drop Q-Logic?
It’s because of durability. I don’t have a clear view of Q-Logic’s future. There are two main threats to Q-Logic.
The first threat is iSCSI. iSCSI is a protocol for SAN that runs over the LAN. It’s cheaper and more convenient than FC, but not as robust. iSCSI is strong with small enterprises that have fewer than 100 servers. Although iSCSI hasn’t made a dent in FC’s installed base, it has blocked FC’s growth.
The second threat is cloud computing. It’s widely accepted that enterprises are shifting computing to the cloud. Unfortunately, cloud service providers may use different storage architectures. Big cloud providers like Google, Amazon, and Facebook use commodity storage instead of an expensive SAN. Other solutions might be iSCSI, network-attached storage, etc.
I think Fibre Channel will stay for many years. Q-Logic’s customers are big enterprises, and there’s huge institutional inertia against moving their computing to the cloud. Sequoia’s discussion of IBM shows they share this expectation:
IBM has said that the cloud is probably its biggest threat. IBM has said it is a $7 billion revenue opportunity but that $4 billion would be a shift of revenue that the company could lose to the cloud. So the management thinks it is a $3 billion opportunity over the current roadmap. That shift would happen — and that is obviously what you would be most concerned about — that shift would primarily happen in hardware because cloud vendors would buy a lot of commodity servers, put them all into a pool and sell by the drip again. Their scale could really further commoditize that business. Luckily, IBM is relatively small in that area. There are also some services that could be obviated. IBM does provide a lot of services to build the internal IT infrastructure, which could be outsourced to the cloud. But typical large enterprises have not just one or two applications, but thousands if not tens of thousands of applications. Today, they often rely on IBM to manage them. Even if a lot of these applications moved to the cloud, they would still require the ones that remain onsite to be able to communicate with the ones that they have moved to the cloud. Companies would still need integration and training to make sure that their business processes work with the new applications. So while there may be some shift away, we still think that IBM services will have work to do even in an era of cloud applications. We think that the trend is quite long-term. As I said, the spending on cloud computing is nascent right now. IBM is cognizant of the fact that over time this will be a more desirable way to consume corporate IT. Recently management has said that outside of a couple of core applications, things that are unique to a company and the company would want to keep in-house, up to 90% of the rest of the applications could be in the cloud, but it will take a very long time. In the meantime customers will want IBM to help them with that transition. IBM, by the way, sells ... 
you should think of it as, to a significant degree, a software company because 45% of the profits are coming from software. IBM is going to offer that service or all that software as a service over the cloud itself. So IBM will be able to take advantage of that opportunity as well.
Roughly speaking, up to 90% of servers in big enterprises will move to the big cloud providers over a very long time. Meanwhile, small companies, and companies growing big, won't build large in-house data centers. So Q-Logic's core business will decline gradually.
The Avid Hog Wants Almost Risk-Free Investments
Q-Logic is definitely not a good buy-and-hold investment. It might be cheap. A turn in the spending cycle may give investors some last (good) puffs. But that may or may not happen.
So, an investment in Q-Logic has high potential return but also high risk. It would fit a diversified investment portfolio perfectly. But it doesn’t fit The Avid Hog. The Avid Hog is interested in investments with almost zero risk. So, we want a sustainable moat. We want durable products. And we avoid “unpredictable” declining businesses.
Q-Logic is in an “unpredictable” declining business. The Men's Wearhouse (MW) might be in a declining business, but it’s not an unpredictable one. There is always demand for suits, and surviving companies don’t necessarily decline. But for Q-Logic and Emulex, we don’t know how fast demand for FC will decline. And in tech, declining really means dying.
Try Before You Buy: To sample the current issue of Geoff and Quan’s newsletter, The Avid Hog, just email Subscriber Services and ask for a copy.