Price:               51.82   |                2018 | 2019
Shares Out. (in M):  1,160   | EPS ($)          11 |   10
Market Cap (in $M): 60,099   | P/E             4.7 |  5.1
Net Debt (in $M):      754   | P/FCF           7.0 |  6.8
TEV (in $M):        61,722   | EBIT (in $B)     11 |   14
                             | TEV/EBIT        5.5 |  4.4
This is a re-activation write-up – so I hope you will be kind enough to give me the ‘thumbs-up’. It took time to write because of its length and details! (I hence missed the deadline by a couple of days but wanted to polish it off).
I also hope I will get some feedback on my views of supply and demand – in essence that effective supply growth will be less than anticipated and demand will be greater.
There has been a lot of noise but a lack of depth in much of the coverage of Micron. In the following I discuss the memory market and highlight areas that I think the mainstream is missing. It is worth remembering, when reading the following, that even a one percentage point mismatch between supply and demand can impact prices – I hope to show that the mismatch is potentially big and growing.
(The following YouTube video is helpful background for anyone not familiar with semiconductor manufacturing - https://www.youtube.com/watch?v=aWVywhzuHnQ )
In November 2017 Morgan Stanley downgraded the memory sector. The basis of the downgrade was essentially that the sector is cyclical and what goes up must come down. This hit Micron’s share price at the time. However, hidden in the first table within the note was a 41% UPGRADE in EPS for Micron.
More recently UBS did something similar, claiming that SOMETIME in the next couple of years memory prices would come down and (shock, horror!) there might be new Chinese capacity of about 170,000 wafers per month being built, again sometime in the next two years. The issue with both these assertions is that they describe business as usual – memory prices do generally fall (the industry is geared up for that) and the capacity that is coming on stream is planned, expected and needed.
In the following write-up I address the above issues and lots of other points to do with the memory industry. It will also touch on aspects that are relevant to Western Digital, Samsung and other companies.
The first thing to recognise is that not all memory is the same; and the production lines cannot be (easily) switched from one type to another (we will discuss this in more detail later). So one needs to consider each type of memory separately. (At least some coverage of the above notes has confused different memory types.)
For the longest part of the memory industry’s existence the key product has been DRAM (dynamic random access memory). This is memory that is fast but only retains its content whilst it has power. A typical desktop PC might have 8GB to 32GB. (1GB = 1 billion bytes).
ROM (Read Only Memory) is used in many devices to store the initial instructions when the device is switched on. Typically ‘NOR’ memory is used for this. The amount of NOR in a PC is very small compared to the amount of other types of storage. This rule of thumb applies to most other devices, but there are some exceptions. Spansion / Cypress Semiconductor were strong in NOR. [Actually a lot of ‘ROM’ these days is EPROM – but that is probably too technical for most investors.]
NAND memory is also known as ‘flash’ memory and is the type of memory within SD and CF cards and ‘USB thumbdrives’. It is also the memory used in SSDs (solid state drives). It first became a prominent growth driver for the industry when Apple came out with its first solid-state iPod.
SSDs are displacing hard disk drives (HDDs) in some applications, eg in laptops. Even in desktops most serious users will have an SSD as a ‘boot drive’. In cloud datacentres, companies such as Amazon offer servers with 20+ cores – to feed such beasts you need fast access to data, and that is where SSDs shine.
In general the way to think about SSDs is that they are far faster than HDDs but also much more expensive (say 5 – 10x or more). SSDs also do not offer the large capacities that HDDs do at anywhere near the same price.
However it is important to note that as prices fall the addressable market for SSDs expands – but the reverse also holds – last year rising NAND prices and shortages led to SSDs losing share in laptops (to circa 40% from circa 50%). (At least one analyst considered it bearish for the memory sector that the market share of NAND in laptops shrank – I think it is bullish as it implies that demand elsewhere was limiting availability.)
As a general rule of thumb in a ‘hyperscale’ cloud data centre about 5 – 10% of all storage will be on SSDs – the rest will be on HDDs – so there is a lot of potential cannibalisation if SSDs fall in price.
There is also a dirty secret regarding both SSDs and HDDs that needs to be remembered – neither is perfect. SSDs wear out – it might be hard to believe with respect to silicon, but the storage structure actually wears out, so SSDs have ‘wear levelling algorithms’ and do other clever tricks to minimise this. Nonetheless an SSD that is thrashed every day in a datacentre or even a high-performance desktop PC will wear out. Similarly HDDs have failures (datasheets for drives always show an MTBF = mean time between failures) – so it is typical to put HDDs in critical applications into arrays or otherwise introduce error correction.
If we go back 20 years, there were numerous (roughly 20) memory companies, all supplying a similar product (DRAM) to the same customers – essentially PC makers. At the time there were only five major PC makers – IBM, Compaq, Dell, HP and Gateway (some called them the ‘5 horsemen’).
When 20-plus suppliers chase five customers, supply is in surplus with new capacity coming on every day, and the customer has low switching costs, it does not require a tier 1 bank’s analyst to conclude that the producers have limited or zero pricing power. And indeed that is what would happen – prices would crash, the industry would make losses and underinvest, there would be shortages, prices would rise, there would be a boom, massive new investment, and prices would crash again. The industry was CYCLICAL.
However today the supplier / customer relationship has changed. Today three companies (Samsung, SK Hynix and Micron) have 90 – 95% market share of the global DRAM market.
For those thinking it is a misprint let me put it another way – THREE PLAYERS now control the DRAM industry – no ifs, no buts. This has come about by players dropping out and others being consolidated – indeed Micron has been one of the biggest consolidators (acquiring Inotera and Elpida). Given the significant consumption of DRAM and NAND by Samsung internally (for its own handsets and other devices) the supply / demand relationship is even more skewed in favour of suppliers.
In terms of customers the PC industry is now roughly 20 – 25% of the output. The other major customer segments are ‘datacentres / servers’, mobile phones and then a whole variety of smaller areas including automotive and set-top boxes.
I am more circumspect than most in focussing on exact market shares for different end markets as some are fungible – eg if I have a box under my desk with two Xeon processors, is that counted as a PC or a server? Similarly, is an iPad Pro (+ keyboard) counted as mobile or desktop?
Though mobiles account for roughly a quarter of all DRAM demand, it is surprising how relatively little DRAM is in a modern iPhone – eg an iPhone X has 3GB of DRAM. Indeed there have been persistent rumours that Apple has wanted to up the DRAM to 4GB but not found enough availability. Clearly the largest two players in the handset market are Samsung and Apple – but there are numerous other smaller brands – especially on the Android platform.
It may seem odd to separate the ‘hyperscale cloud’ companies – but these companies tend to specify their own computer designs and often have them made specifically for them by ODMs (original design manufacturers) in the Far East. The BAT MAN FANG companies (Baidu, Alibaba, Tencent, Microsoft, Apple, nVidia, Facebook, Amazon, Netflix and Google) are the biggest direct or indirect buyers of servers on the planet for their own use. (Actually, I have taken a slight liberty as nVidia is a hardware provider for machine learning, and Netflix uses AWS to a significant extent – but I felt it is easier to remember this way.)
We could also add companies such as IBM and Rackspace which have significant cloud infrastructure. In addition, there will be parts of governments which are direct or indirect users of large data centres.
(The following article mentions that there are 400 hyperscale data centers globally - https://www.networkworld.com/article/3247775/servers/facebook-and-amazon-are-causing-a-memory-shortage.html )
It is worth taking a brief look at automotive. As cars get more sophisticated – eg sat nav, autonomous driving, more electronics – there is more computing power within a car and that inevitably means more memory (both DRAM and NAND).
KEYPOINT 1: Supply concentration vs customer diversification
So we have moved from 20 years ago to a situation now where there are THREE major suppliers and numerous customers spread over several industries (PCs, cloud, mobile, auto, set-top boxes etc) ie the industry now has some PRICING POWER.
One of the most important KPIs in the industry is ‘bit growth’ ie the increase in bits shipped this year or this month compared to last year or last month. As a reminder there are 8 bits in a byte (don’t laugh but even major bank tech analysts have got that wrong on public conference calls). As a general rule ‘demand’ grows by 20 – 30% per year. In recent years the impact of the cloud has pushed it up to 35% or even more.
The hyperscale companies do not publish data on how much memory they are consuming or their future plans. There is currently an arms race between different hyperscale cloud service providers to sign up customers – under the belief that switching costs mean users will stay loyal. We think capex spend is a useful proxy – and the announced capex plans of the major hyperscale companies suggest an increase in spend of over 50% in 2018 yoy. Admittedly this includes computers, offices and other assets, but any way we cut the data we have to conclude that potential spend by hyperscale companies is being underestimated.
Since there is no hard data many analysts are, we believe, underestimating the bit growth from hyperscale customers.
As an aside, servers often have ECC (error correction code) memory – this requires extra overhead on the chips to ensure that there are no memory errors during operation. This means that a square centimetre of silicon wafer will yield fewer effective memory bits because of the ECC overhead. Why does this matter? Because AWS (Amazon) uses ECC – and I am sure most other hyperscale cloud companies do too. Given that hyperscale has been the biggest driver of DRAM missed by the market, the impact of ECC is important.
Roughly, for a 64-bit ‘word’, ECC memory might require 7 bits for error correction, ie there is roughly a 10% overhead ( = 7/(64+7) ). So if DRAM bit demand from hyperscale cloud providers is growing roughly 50%, we would need to add an additional 5% of demand to allow for the ECC overhead.
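For anyone who wants to check the arithmetic, a quick sketch (the 50% hyperscale growth figure is the rough assumption from above):

```python
# Back-of-envelope: extra DRAM bit demand implied by the ECC overhead.
# Assumes 7 ECC bits per 64-bit word, as per the rough figure above.
ecc_bits, word_bits = 7, 64
overhead = ecc_bits / (word_bits + ecc_bits)              # ~9.9%, call it 10%

hyperscale_bit_growth = 0.50                              # assumed ~50% yoy growth
effective_growth = hyperscale_bit_growth * (1 + overhead)
print(f"ECC overhead: {overhead:.1%}")                    # 9.9%
print(f"ECC-adjusted hyperscale growth: {effective_growth:.1%}")  # ~55%
```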
DRAM manufacturers are able to charge a premium for ECC memory. Clearly the pricing for Google or Amazon is different than for retail – but it is not unknown for retail prices for ECC memory to be 100% higher than for non-ECC memory.
(Search for ‘ECC’ on the following page for Amazon’s usage of ECC: https://aws.amazon.com/ec2/faqs/ )
KEYPOINT 2: Hyperscale and ECC
The sellside appears to be ignoring the growth in demand from hyperscale cloud companies, their requirements for ECC and the ECC ‘overhead’ plus the premium pricing.
An extraordinary paper, co-authored by Google, revealed that DRAM errors are much more frequent than generally realised and that DRAM appears to start deteriorating (at least in Google’s data centres) after 10 months. Further they report error rates which are ‘orders of magnitude higher than previously reported’; and that 8% of all DIMMs and 1/3 of all machines are affected in a single year. Additionally, Google found error rates increased with machine utilisation.
The implications of this paper are extraordinary. It mentions that Google replaces memory that starts showing errors. I have never seen any sell-side research discuss the need to replace memory in situ this frequently and it adds incremental demand for ECC memory. I think it is reasonable to assume the other hyperscale companies have similar procedures.
(Google co-authored DIMM error rates is here - http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf )
KEYPOINT 3: Replacement cycle
There may be hidden demand from the replacement of failing server memory which could be occurring much more frequently than the market realises. Roughly I estimate this has a 1% impact on overall ‘effective’ DRAM demand (ie assume that high utilisation server memory is about 10% of global memory – and that 8% is replaced per year – with a 10% overhead as discussed above – so 10% x 8% x 1.1).
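The same estimate laid out step by step – all three inputs are the rough assumptions stated above:

```python
# The Keypoint 3 estimate. Each input is an assumption from the text,
# not a sourced industry figure.
server_share_of_dram = 0.10   # high-utilisation server DRAM as a share of all DRAM
annual_failure_rate  = 0.08   # ~8% of DIMMs show errors per year (Google paper)
ecc_overhead_factor  = 1.10   # the ~10% ECC overhead from Keypoint 2

replacement_demand = server_share_of_dram * annual_failure_rate * ecc_overhead_factor
print(f"Extra effective DRAM demand: {replacement_demand:.2%}")  # 0.88%, ie roughly 1%
```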
In the Q2 2018 earnings call, Sanjay Mehrotra stated:
‘And if you look at trends, when we project that 2017, about 145 gigabytes [DRAM] per server going to about 350 gigabytes per server by 2021. Similarly, if you look at flash storage, 1.5 terabyte average in 2017 growing to something like 6 terabyte average with each server by 2021 timeframe.’
For reference, consumer PCs are being offered with 8GB of DRAM and either zero or 256GB of flash. The growth rate for DRAM that Sanjay implies is 25% compounded (and 41% for NAND).
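For those who want to verify the compounding from the quoted 2017 and 2021 figures:

```python
# Compound annual growth rates implied by the 2017 -> 2021 per-server figures.
dram_cagr = (350 / 145) ** (1 / 4) - 1    # 145GB -> 350GB of DRAM over 4 years
nand_cagr = (6.0 / 1.5) ** (1 / 4) - 1    # 1.5TB -> 6TB of flash over 4 years
print(f"DRAM per server CAGR: {dram_cagr:.0%}")   # ~25%
print(f"NAND per server CAGR: {nand_cagr:.0%}")   # ~41%
```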
In comparison I am advising family members to buy laptops with 16GB of DRAM, and desktops with 32GB (in both cases with 1TB of flash). It is probable that high-end laptops (ie laptop workstations and gaming laptops) will reach 32GB by the end of 2019 depending on when Intel produces Cannon Lake (see https://9to5mac.com/2017/09/20/2018-macbook-pro-cpu/ ) (32GB laptops need the LPDRAM support at that capacity that Cannon Lake will provide).
The NetworkWorld article referenced earlier mentions that Amazon alone is believed to have bought 250,000 servers in one quarter. With roughly 15 times more memory per server than a typical desktop, one can see how important servers are in driving memory demand.
[For reference the following page suggests that the typical PC has about 7GB of RAM - https://techtalk.pcpitstop.com/research-charts-memory/ ; there are roughly 262M new PCs shipped per year (see https://www.gartner.com/newsroom/id/3844572 ). It does not take much maths to conclude that server memory could soon be a bigger part of the market than PC memory.]
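As a rough illustration of that maths – note the global server shipment figure below (~10M units per year) is my own assumption, not from the sources above:

```python
# Annual DRAM going into PCs vs servers. PC figures are those cited above;
# the global server shipment number is an assumption for illustration.
pc_units, pc_gb         = 262e6, 7     # Gartner shipments x typical PC DRAM
server_units, server_gb = 10e6, 145    # ASSUMED shipments x 2017 GB per server

pc_dram_eb     = pc_units * pc_gb / 1e9         # exabytes per year
server_dram_eb = server_units * server_gb / 1e9
print(f"PC DRAM:     {pc_dram_eb:.2f} EB/yr")       # ~1.83 EB
print(f"Server DRAM: {server_dram_eb:.2f} EB/yr")   # ~1.45 EB already
# With GB/server heading toward 350 by 2021, servers overtake PCs.
```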
Keypoint 4: Micron is exposed to server memory and it is driving profits
The following article estimates that 30% of Micron’s product mix is server memory. Given my comments above, it is important to recognise that this is likely to be high-margin business. (See https://epsnews.com/2017/11/30/dram-sales-driven-mobile-server-markets/ )
The industry is continuously working on ‘shrink’, ie putting more transistors into less and less space (higher-density memories), which leads to a falling cost per bit and more bits per square centimetre of silicon wafer. The key point is that the industry roadmap is designed to deliver falling prices and increasing bit growth. So when analysts get excited about increased supply and falling prices you have to wonder if they understand the industry – what really matters is whether prices are falling slower or faster than costs.
KEYPOINT 5: The rate of shrink is slowing down and becoming harder.
When the industry moved from, eg, 35nm to 28nm feature sizes there was a substantial increase in the number of transistors you could put in a square centimetre of wafer. However, the move from 21nm to 19nm (say) is a smaller percentage change than from 35nm to 28nm. And the transition from one size (known as a ‘node’) to another is also taking longer due to increased technical difficulties. But it is harder for everyone – and this is impacting supply. (Yields are initially low at new nodes, and yield stabilisation is taking longer as time progresses.)
As the feature size for memory gets smaller companies are moving from a physical world to a quantum world. That may sound an odd statement but the point is that parts of chips are becoming so small that they are being designed at atomic levels. Traditionally light was used for transferring a pattern from a ‘mask’ to the surface of a wafer – at some point in the next few years the industry will start to use EUV (extreme ultra-violet) machines for ‘critical layers’. These machines are unbelievably expensive ($120 - 250M per machine), complicated to run and have long lead times (there is only one supplier - ASML).
And, perhaps even more important, EUV machines take a lot of space in a fab, rarely work continuously 24x7 and have lower wafer throughput than the lithography machines they displace at previous nodes.
Even outside of EUV, the lead times for lithography equipment and other critical components for the semiconductor industry limit the speed of new capacity expansion.
KEYPOINT 6: There are pending equipment shortages.
The equipment and know-how for leading-edge DRAM is in short supply and the throughput is lower per dollar invested or per sqft of fab space. (There are stories of Chinese manufacturers offering $1M-plus salaries for experienced semiconductor engineers.)
Historically, every few years the industry also moved to a larger wafer size – given that the equipment took up roughly the same footprint in a fab, the industry increased bit growth in this manner – so over the years we had transitions from 4” (100mm) wafers to 5” (130mm) to 6” (150mm) to 8” (200mm) and then to 12” (300mm). We were meant, around 2012-2015, to transition to 18” (450mm) wafers – but so far we have not (and it looks unlikely we will in the foreseeable future).
So consider this – in the past we could put new equipment in the same fab, with the same floorspace, and substantially increase output (ie increase bit growth), by going to larger wafers. The larger wafers also meant lower cost per bit. The lack of move to 450mm has stopped that progression.
KEYPOINT 7: The missing wafer size increases shortages
The missing transition to 450mm wafers is being overlooked by many people. It means that the increase in supply (and fall in cost) that a larger wafer size would have brought has NOT happened – which has limited supply growth.
There is very limited analyst attention to the raw materials used to make chips. Silicon wafers, though similar to the wafers used for, eg, solar panels, have a completely different level of purity (99.999% pure), are ‘doped’ with specific ingredients and are subject to very high quality control (they have to be precisely machined to be exactly flat). There are five major players in the wafer market – Shin Etsu (31% market share), Sumco (28%), LG Siltron (16%), Siltronic (15%) and GlobalWafers (10%) (source slide 4 here - https://www.siltronic.com/fileadmin/investorrelations/Pr%C3%A4sentation/Siltronic_Fact_Book_2017.pdf - based on 2016 annual reports).
The 300mm wafer market is however even more concentrated with Shin Etsu, Sumco, Siltronic and LG Siltron (in that order) being the major players.
Each semiconductor fab will have its preferred wafer supplier who has its own ‘magic recipe’ to produce wafers to the specifications of that customer. Often the magic recipe is local to a very specific wafer plant. Note that specifications might vary depending on the specific product that the fab is making at that time.
Thus switching wafer suppliers is rarely done. And wafer suppliers in turn have specialised equipment with long lead times. Presentations by the major wafer players suggest shortages at least into late 2019 if not longer (some are taking orders for 2020). (See slide 10 of the following http://v4.eir-parts.net/v4Contents/View.aspx?cat=tdnet&sid=1582195 – also note that SUMCO ignores some Chinese customers so is potentially underestimating demand!)
KEYPOINT 8: 300mm Wafer shortages
There is currently a shortage of 300mm wafers with non-strategic customers being turned away. Prices of wafers have risen dramatically over the last year and there are signs that they will continue to rise (see industry coverage). Most analyst coverage of future DRAM supply has ignored that there may be a shortage of wafers which will limit new supply of DRAM.
(It should be noted, however, that wafer costs are typically only 1 – 2% of the selling price of a DRAM chip, so the DRAM manufacturers are able to absorb the price increases – they have long-term relationships which ensure supply.)
In the old days life was easy – DRAM was essentially a commodity and one piece of DRAM from one supplier would match a piece of DRAM from another supplier and either could be used in an end-product (ie a PC).
Nowadays things are a tad more complex. There are many more variants of memory – for instance DDR2, DDR3, DDR4, graphics memory, ECC memory (error correction code) or mobile RAM (which may be low-power).
Also memory is sold with different speeds and other specifications (eg CL = column address strobe (CAS) latency). The best way to appreciate this is to peruse the following page from Crucial, where at the time of writing there are 421 parts listed - http://crucial.com/usa/en/memory?cm_re=us-top-nav-_-us-flyout-memory-_-us-memory-all
The different variants of memory are made in batches, and a fab can optimise its mix according to the orders it has and their profitability.
In the mobile space DRAM is often packaged into a single unit (an eMCP) with other types of memory (eg NAND) and custom-built for a specific customer. This means that manufacturers who make both DRAM and NAND and can package them together have a competitive advantage. (See for instance - https://www.micron.com/products/multichip-packages/emmc-based-mcp )
KEYPOINT 9: Commodity vs differentiation
All memory is not the same and lots of variations mean that it is not a pure commodity.
Where possible a DRAM fab wants to produce to specific customer demand. And when supply is tight, customers are more interested in security of supply than solely price. Some customers, eg automotive, have long product cycles and so need guarantees of supply for several years, and sign contracts on that basis. Others, eg Apple, are of such scale that they need guaranteed supply and also customisation to their specification.
Even in the PC market the major customers (eg HP Inc (HPQ) and DELL) will sign contracts to guarantee supply.
KEYPOINT 10: Spot market is only a small and increasingly irrelevant part of the market
The ‘spot’ market for DRAM reflects supply which was not sold on contract or which was surplus to the requirements of major players who did sign contracts. It is a very small part of the overall market. Relatively small orders, in the context of the industry, can move the spot price substantially. The main memory sold in the ‘spot’ market is relatively vanilla PC memory – and does not necessarily reflect prices of more specialised memory. Furthermore there is no true ‘central exchange’ or clearing point for DRAM – specifically it is NOT analogous to the crude oil market – yet most analysts assume it is.
Thus memory that appears in the spot market is the ‘residual’ after contracts have been filled. As more large customers sign contracts to ensure supply the spot market will get tighter.
Even when a fab is producing with stabilised yields, not all the memory chips on one wafer (or between batches) perform the same way. Hence in the test phase of manufacturing each chip on a wafer (a wafer might have hundreds of chips) is individually tested and then ‘binned’, ie put into specific lines depending on whether it works and how well (ie what speed it runs at).
Some memory companies are more rigorous in this than others. Some third-party merchant vendors (who might buy complete wafers or lower-performing ‘binned’ chips) have been known to ‘retest’ and relabel them at higher specifications. Furthermore there is often mixing of products from different suppliers / batches in packages of memory purchased in the ‘spot’ (or merchant) market.
Hence the ‘spot market’ can be a ‘caveat emptor’ market.
Keypoint 11: The Spot Market and quality
The quality of memory purchased in the spot market can be ‘variable’ with some players ‘re-testing’ or ‘upgrading’ memory that has been classified at a lower rating by the major players.
When a DRAM fab introduces a new node or a new product variant it has to ‘stabilise’ the yield, ie make lots of minor tweaks to try to improve the good-to-bad chip ratio. As part of that the fab will run ‘test’ wafers to help calibrate the equipment. The test wafers can represent 5 – 10% of all the wafers a fab (or at least some of the machines in the fab) processes – obviously more in the ramp-up phase. As issues become more technologically challenging, the percentage of test wafers increases. This effectively removes some fab capacity.
If a fab changes batch, eg moving from DDR4 to ECC say, it will also increase its test wafers whilst it gets to stable yields with the new product. Thus the more varieties of memory there are, the more test wafers there will be.
(It should be noted that there is a whole sub-industry in re-cycling test wafers).
KEYPOINT 12: Test Wafers
The increase in complexity and product SKUs means more test wafers which reduces the effective throughput of fabs. Many analyst models are incorrect as they do not adjust for test wafers.
In recent years Intel has introduced processors with more cores. Many readers will be familiar with the i3, i5 and i7 processors, which have 2 to 4 cores (or up to 8 threads with hyper-threading). The new Intel i9 has 10 cores (20 threads) – and the i9-7980XE goes up to 18 cores (36 threads). What many readers will be less familiar with is the Xeon series, used in workstations and compute engines in the cloud – these processors can have as many as 72 cores (eg the Intel Xeon Phi 7290F - https://www.intel.com/content/www/us/en/products/processors/xeon-phi/xeon-phi-processors/7290f.html )
Though there is no strict relationship between the number of cores and the amount of memory, as a general rule there is no point in having multiple cores unless you can deliver data to them fast enough – which in turn means the more cores, the more DRAM, ie you need to ‘FEED THE BEAST’.
And artificial intelligence, machine learning or even everyday cloud services (eg email) can burn through the cores.
KEYPOINT 13: Feed the beast
More cores leads to more memory.
DRAM fabs are almost, but not quite, single-use. With some re-configuration (think 1 – 3 months and an expenditure of several tens to hundreds of millions of dollars or more) a fab can be reconfigured to produce NAND flash or image sensors.
Samsung in particular has been known to switch capacity away from DRAM to NAND or image sensors depending on demand and profitability. Though this happens at the margins, it is important to appreciate that increased demand for NAND can thus lead to tightening supply of DRAM. Indeed in the autumn of 2016 Samsung switched some capacity in Line 16 to NAND due to strong demand.
Going forward, IOT and automotive sensors are likely to drive increasing demand for image sensors (ie camera devices), so I will not be surprised if some (especially older) DRAM fabs are switched to this. This will tighten DRAM supply.
KEYPOINT 14: Fab Switching
The ability to switch from DRAM to NAND or image sensors allows the opportunity to capture more overall profit for the memory industry. But switching also takes capacity away. (This also has an impact on the economic life of fabs - it may be longer than the accounting life).
Whether we talk about DRAM or NAND fabs, one needs to appreciate their scale and specifications. In 2015 EE Times reported that the most expensive fab in the world was Samsung’s Pyeongtaek fab at $14 billion and that it would employ 150,000 people. (See https://www.eetimes.com/document.asp?doc_id=1326565 ) The latest Samsung fab (P2) will cost $27.8 billion (see https://www.anandtech.com/show/12498/samsung-preps-to-build-another-multibillion-dollar-memory-fab-near-pyeongtaek ). Such fabs are planned and built over several years and then filled with equipment in stages. Given the size and scale of investment required to build a competitive DRAM or NAND fab, there is no ‘surprise’ supply which suddenly appears. Similarly, given the expense, manufacturers get indications of demand (if not signed contracts) before they invest ‘speculatively’.
KEYPOINT 15: Fabs take time to build and ramp up
Be wary of analysts claiming sudden new supply will arrive. The industry will be aware years in advance.
PUTTING IT ALL TOGETHER
DRAM fabs run 24 x 7 but given the limited number of players and the large number of customers; plus the rapid growth of new areas (mobile and then cloud and also automotive) the industry has pricing power. The high differentiation of products and long term contracts means that it is not a true ‘commodity’.
As discussed earlier, NAND can be considered a replacement for hard disk drives. However, due to its size and resilience it has also enabled slimmer devices (eg MacBook Air, HP Spectre x360, iPads). Again this is a consolidated industry, with six major players (combined market share of over 90%) – Samsung, Toshiba, Western Digital, Micron, SK Hynix and Intel.
Six players may not seem that concentrated but there are a few nuances – Toshiba and Western Digital are in a joint venture. To complicate matters further, Toshiba has a pending sale of part of its share to a consortium comprising Bain, Hoya, SK Hynix, Apple, Seagate, Dell and Kingston.
Micron and Intel have had a joint venture in flash (though the relationship is ceasing with the next generation).
Thus the six major players really represent four groupings. (Note that we are counting SK Hynix as standalone as it is a firewalled financial investor in the Toshiba / Western Digital JV.)
KEYPOINT 16: The NAND industry is also highly consolidated
As with DRAM, the NAND customer base extends across the PC, cloud, mobile, automotive, set-top box and IOT markets, with numerous customers. In addition everyone is familiar with USB ‘thumbdrives’ and memory cards. The key point about such devices is that they help take any excess supply out of the spot market rapidly – and the no-brand ‘packagers’ of such devices act as a ‘buffer’ for the industry.
KEYPOINT 17: As with the DRAM industry the customer base is substantially diversified.
The original iPhone offered 4GB of storage; the top of the current iPhone range offers 256GB.
I want to take an aside to discuss the HP Spectre x360. Historically PC makers built desktops and laptops to a price. Apple, with the launch of its MacBooks and iMacs, showed that customers were willing to pay for a premium product, but much of the industry ignored this, assuming it was an Apple aura effect. With the Spectre x360 HP has shown that customers are willing to pay for premium PC-based laptops – indeed the Spectre x360 is now available with 16GB of RAM and 1TB of SSD. I will refrain from mentioning a price as it is continuously moving, but it is worth looking up. Microsoft has shown the same with its Surface Book range.
2017 was an unusual year as SSD penetration of laptops fell (from 50% to 40%) as prices rose (a lot of PC makers still build to a price). If NAND pricing falls in 2018 I anticipate that SSDs will continue to gain share in the laptop market. The desktop market will follow closely behind, albeit from a lower starting point.
Cloud based servers are available with lots of DRAM and NAND based storage. For instance the Amazon AWS i3.16xlarge configuration comes with 488GB of DRAM and 15.2TB of SSD (see https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/ ) [It is also worth noting that Amazon charges $4.99 / hour for this configuration – at full utilisation it would generate $43,712 in a year – which explains why cloud providers are relatively price insensitive.]
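The arithmetic behind that annual figure, for anyone checking:

```python
# The i3.16xlarge arithmetic: on-demand hourly rate x hours in a year.
hourly_rate = 4.99
annual_revenue = hourly_rate * 24 * 365
print(f"Revenue at full utilisation: ${annual_revenue:,.0f}")  # $43,712
```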
The growth rate of NAND storage over the last decade has probably been even faster than for DRAM, and it looks like it will continue. (See the quote earlier from Sanjay Mehrotra.)
For portable devices where physical space is at a premium there is a clear preference for SSDs over HDDs (imagine a hard disk drive in your iPhone). In high-end PCs and datacentres SSDs provide faster performance than hard disk drives.
The key question to ask for NAND demand is price – as price falls, demand increases dramatically. Who would not want a 1TB iPhone if it was the same price as a current phone; or a 10TB Spectre x360?
In datacentres, performance aside, for data that is continuously accessed (so HDDs are continuously spinning) there are various analyses which suggest SSDs are viable replacements on a TCO (total cost of ownership) basis once one considers the power consumption over five years and also the physical space. It is always possible to tweak these claims for specific cases – but the point is that SSDs are on the cusp of substantially more demand as supply becomes available. A minimal sketch of such a comparison follows.
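The sketch below shows only the structure of such a TCO comparison – every input is a placeholder assumption, not a sourced figure:

```python
# Illustrative structure of a 5-year TCO comparison for always-on storage.
# All numbers are placeholder assumptions for illustration only.
def tco(unit_price, watts, rack_slots, years=5,
        kwh_price=0.10, pue=2.0, slot_cost_per_year=20.0):
    energy = watts * 24 * 365 * years / 1000 * kwh_price * pue  # power + cooling
    space = rack_slots * slot_cost_per_year * years             # physical space
    return unit_price + energy + space

hdd_tco = tco(unit_price=250, watts=8, rack_slots=1.0)   # hypothetical nearline HDD
ssd_ops = tco(unit_price=0, watts=2, rack_slots=0.5)     # SSD running costs only
print(f"HDD 5yr TCO: ${hdd_tco:,.0f}")                              # ~$420
print(f"SSD price needed for TCO parity: ${hdd_tco - ssd_ops:,.0f}")  # ~$353
```

The exact crossover depends entirely on the inputs, but the direction is what matters: each fall in NAND prices pulls more always-spinning HDD capacity into SSD territory.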
KEYPOINT 18: NAND demand is price sensitive
NAND demand continues to grow dramatically. Projecting NAND demand as a fixed number of units is nonsense – the key question is how much the market can absorb at a specific price. Some NAND demand is a ‘must have’ and some is a ‘nice to have’.
In order to increase the amount of storage in a chip (and reduce cost per bit) the NAND industry has moved from 2-D to 3-D chips – ie putting multiple layers on a single piece of silicon. The latest chips have 64 layers; but the industry is in transition with a lot of production still at 32 layers.
The move to increasing layers has not been without complexity. One of the reasons analysts over-estimated supply was that they assumed all companies would move all their production over instantly. Instead the transition has taken the best part of two years.
Some in the industry are starting to design and / or test 72, 96 and even 128 layers. However it is important to note that a 32-layer chip does not have precisely 32 times the capacity of a single-layer chip covering the same area of silicon wafer. Connections between layers take up space – a bit like stairs and lifts take up floorspace in a skyscraper. And just like a skyscraper, the taller the stack the greater the percentage of space dedicated to interconnects. This ‘lost space’ increases with the number of layers.
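To make the skyscraper point concrete, here is a toy model – the overhead function is purely illustrative, not an industry figure:

```python
# Toy model of the 'lost floorspace' point. The interconnect overhead is a
# made-up illustration (0.5% of area per layer of stack height), only
# meaningful for modest layer counts.
def effective_layers(layers, overhead_per_layer=0.005):
    overhead = overhead_per_layer * layers   # overhead grows with stack height
    return layers * (1 - overhead)

for n in (32, 64, 96):
    print(f"{n} layers -> {effective_layers(n):.0f} single-layer equivalents")
# 32 -> 27, 64 -> 44, 96 -> 50: each extra layer adds less than the last.
```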
I think it is reasonable to assume that transitions to even more layers will be done slowly.
KEYPOINT 19: Additional layers do not scale proportionally
Additional layers, like the floors of a skyscraper, deliver less ‘floorspace’ per additional layer.
A chip can only be fully tested once it is complete. The more working surface area and layers the more the probability of a defect. One of the major issues with 3-D NAND has been the difficulty in achieving high stabilised yields (known as ‘mature yields’ in the industry). It can take months to stabilise yields.
KEYPOINT 20: Yield Stabilisation
Yield stabilisation is not immediate on moving to 3D – and with increasing layers this is likely to continue to be an issue.
Micron believes that with 64-layer 3D NAND it is now cost competitive with the other major players. In fact it argues it might even have a competitive advantage because of its design – it puts some of the control logic ‘under’ the NAND layers (in the ‘basement’, to use the skyscraper analogy) as opposed to adjacent to the NAND layers (in an adjoining building). The result is that Micron’s design uses slightly less silicon area.
So now we get technical! In DRAM we discussed the (potential) use of EUV for critical layers. Currently NAND is produced using traditional optical lithography. However, going back to our skyscraper analogy, we can think of a 3-D NAND memory chip as being very tall. One difference between building a memory chip and a skyscraper is that we need to drill from the top through the whole structure once it is completed, to provide some of the interconnections between layers. This means making small but very deep holes – ie high-aspect-ratio etch steps. Such steps take time – some estimates suggest 30 – 60 minutes per etch (eg see https://www.semiwiki.com/forum/content/6871-3d-nand-myths-realities.html ).
(Quick reminder: Lithography places the pattern on the surface and then the chip is usually plasma etched).
KEYPOINT 21: Etch times impact throughput
Building a 3-D 32- or 64-layer structure takes time and equipment, so the capex per wafer processed has gone up; and the throughput of a fab is less than its 'faceplate' capacity.
All the issues we discussed about the availability of silicon wafers for DRAM apply to NAND as well.
One of the bear cases for the memory sector is the entry of the Chinese – particularly YMTC (Yangtze Memory Technologies).
However some of the coverage has confused the exact scale of the market entry and sometimes which companies are entering the DRAM market versus the NAND market. Here I will concentrate on NAND but similar issues to those discussed below apply to DRAM.
The indications are that the Chinese manufacturers are at the pilot-line stage, sending test chips to customers for qualification (see discussion on this topic in the ASML Q17 earnings call – https://finance.yahoo.com/news/edited-transcript-asml-earnings-conference-210530995.html ). To go from here to full capacity will require a ramp-up period measured in years.
We estimate that full capacity (with stabilised yields) of the Chinese fabs will not happen until 2020 – 2021. (Remember that equipment has to be bought, installed and then yields stabilised).
Keypoint 22: Ramp up
The ramp up of the Chinese fabs will take time – it is not going to happen overnight.
Even the most advanced Chinese NAND producer (YMTC) has said it will only reach full production volumes of 32-layer in the 2020 – 2021 timeframe. From the discussion above, the leading-edge established fabs are likely to be on 72 – 96 layers by then. Thus for a single wafer the Chinese will have less than half the bit output. When one also allows for the Chinese probably being 1 – 2 line shrinks behind, the effective output from a YMTC fab processing 130,000 wafers per month is likely to be somewhere between 30 – 50% of that of an equivalent Samsung or Micron fab – ie those 130,000 wafers will be equivalent to 39,000 to 65,000 wpm from a leading-edge fab.
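The wafer-equivalence arithmetic laid out, using the 30 – 50% range from above:

```python
# Keypoint 23 conversion: YMTC 'faceplate' wafer starts expressed as
# leading-edge-equivalent wafers, using the text's 30-50% range.
faceplate_wpm = 130_000
for effectiveness in (0.30, 0.50):
    equivalent = faceplate_wpm * effectiveness
    print(f"At {effectiveness:.0%} effectiveness: {equivalent:,.0f} wpm equivalent")
# 39,000 to 65,000 wpm versus an equivalent Samsung or Micron fab.
```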
Keypoint 23: All wafers per month are not the same
Chinese fabs are likely to produce substantially fewer bits per wafer – and hence some analysts are substantially over-estimating new-entrant supply.
The major memory companies have thousands of patents. It is not clear what IP governance the Chinese companies are under. This will be a major issue for any end OEM or ODM who wishes to export product containing Chinese memory chips to the USA.
In addition the ongoing push for China to respect US IP (see https://ustr.gov/about-us/policy-offices/press-office/press-releases/2018/march/following-president-trump%E2%80%99s-section ) is likely to put further pressure on China.
Finally there is also a non-zero chance that a US court might find equipment companies supplying the Chinese fabs as having joint or secondary liability (see for instance this article - http://www.klemchuk.com/secondary-liability-for-trademark-infringement/ - especially the paragraphs discussing Louis Vuitton – admittedly this relates to trademark rather than patents; also see the following re Germany - https://united-kingdom.taylorwessing.com/en/insights/transceiver/liability-of-foreign-suppliers-for-their-customer-s-patent-infringement-in-germany ).
It should also be remembered that the US controls critical parts of the know-how related to semiconductor manufacturing, and some of it is dual-use (ie military and civilian). Historically the US used to be much stricter on exports of dual-use equipment – there is a chance that under the present President such a stance is revisited.
Keypoint 24: Intellectual Property Liability
IP risk for the Chinese memory manufacturers should not be ignored and will restrict their sales.
Historically Micron produced wafers (whether for DRAM or NAND) and then sold them to others to package and sell on. Under Sanjay Mehrotra the company has increased its focus on providing ‘solutions’. In the NAND market this means not selling ‘NAND’ but rather selling SSDs (an SSD includes ‘controller’ logic as well as NAND memory). In the Q2 2018 conference call management made several mentions of becoming a ‘solution provider’ and of ‘value add products’.
Clearly this improves Micron’s profits per wafer, but there is also a secondary impact that we believe the market is ignoring. As Micron moves up the value chain it will supply less to the ‘trade’, ie companies such as Kingston. I think it is reasonable to assume that this will push smaller ‘trade’ players into the spot market for supply.
It will also be noted that Kingston is a member of the consortium buying into the Toshiba memory business. If that transaction fails (there is speculation of failure in the Japanese press) this might increase the pressure on ‘trade’ customers to source memory.
Additionally, Micron discussed in the Q2 2018 call the cost of ‘qualifying’ product and the cost of 3D XPoint development. ‘Qualification’ is likely to mean that the company is hoping to win some significant contracts. As for 3D XPoint – though the jury is still out, there is significant reason to believe it will address specific pain points for hyperscale companies.
Keypoint 25: Micron is moving up the value chain.
Moving into value-added products underpins Micron’s future profitability but also withdraws supply from the spot and trade / merchant markets.
The following is a speculative statement on my part – hence I have left it until last. Let me begin with some comments Micron made in its last earnings call (22 March 2018):
‘To support these data-intensive capabilities, flagship and high-end smartphones are migrating toward 6-gigabytes of LPDRAM, a trend that bodes well for Micron given our leadership in LPDRAM power efficiency, which is essential for optimizing battery life.
Average storage densities are also increasing across all smartphone classes, with new flagship models using 64 gigabytes of flash memory as a minimum. Micron's portfolio of managed NAND solutions is well suited to address this growing demand, and we are leading the industry in TLC utilization with a portfolio that leverages the strong attributes of our 3D NAND technology.’
Given the cost of qualification we speculate that Micron may be working on a major contract win. The comments on LPDRAM and NAND plus Micron’s focus on value-add and packaging make us wonder if Micron is working on a major win for eMCP.
We note that Apple has been trying to decrease its exposure to Samsung; furthermore there is market speculation that Apple may be planning on launching several new phone models.
If the comment by Micron regarding 6GB LPDRAM for flagship smartphones is correct – and similar increases occur for other mobile phones then that alone will lead to DRAM shortages.
I should emphasise that I am not saying that prices will stay up. The key point is that shrink and increasing layers drive down costs. Historically memory prices (as a rule of thumb) fell 15 – 17% per year – and costs fell around the same amount. As long as prices, from this point, fall only at the same rate as costs, margins will stay flat. Given the points discussed above, which appear not to have been covered by much of the market, I am more bullish on supply – demand than most sell-side analysts. If I am only half right, prices will not fall as rapidly as the market anticipates, which means Micron will maintain its margins for longer than anticipated. At a 5x P/E that is all that matters for any analysis. A minimal numerical sketch of this margin point follows.
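The sketch below uses purely illustrative starting numbers:

```python
# If price and cost per bit fall at the same rate, the gross margin
# percentage is unchanged. Starting numbers are illustrative only.
price, cost, annual_decline = 1.00, 0.50, 0.16   # ~16% declines in both

for year in range(4):
    margin = (price - cost) / price
    print(f"Year {year}: price {price:.2f}, cost {cost:.2f}, margin {margin:.0%}")
    price *= (1 - annual_decline)
    cost *= (1 - annual_decline)
# Margin holds at 50% every year; margins only compress if prices fall
# faster than costs - which is precisely the debate.
```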
I hope I have shown that the supply / demand relationship for the memory industry has changed – with a lot of supply concentration and a diversified customer base. The complexities of smaller nodes (for DRAM) and increasing layers (for NAND) mean that stabilised yields are taking longer to achieve. And there is a substantial difference between ‘faceplate’ production capacity and actual effective capacity, which will be relevant to the new Chinese entrants.
The numerous memory variants mean that memory markets are becoming more differentiated and less commoditised. And the move of Micron into value add (eg from NAND into SSD) and into packaging (ie combining NAND and DRAM) means less product available for the ‘trade’ (/merchant) and spot markets – this is going to continue the squeeze in the spot market.
Furthermore the growing shortage of 300mm wafer supply (and the long lead time for equipment) constrains over-expansion by the industry.
If I am even barely correct on only a few of the key points discussed above, then the sell-side has modelled the future supply / demand for memory incorrectly, and Micron will continue to generate attractive cashflows and be much, much less cyclical than the market is assuming.
Continuing demand for DRAM and NAND - greater than expected by the market. Possible major eMCP contract win.