Price: $53.60
Shares Out. (in M): 152
Market Cap (in $M): 8,137
Net Debt (in $M): 6,211
TEV (in $M): 14,349
Long: Coherent Corp. (COHR)
Let there be light: On the critical optical connectivity bottleneck within GPU clusters
I finally got around to finishing my investment thesis focused on optics; it complements a thesis on copper (AECs) posted over the weekend by agentcooper2120 (great article, thank you!)
I underwrite Coherent (COHR) --- a maker of optical components --- for a 20%+ IRR for a 3-year holding period given the secular adoption trends in GPU clusters. I discuss in detail some AI/non-AI risks and mitigants.
Rationale:
As the old physics joke goes, God chanced upon Maxwell’s Equations (the fundamental laws of electromagnetism) and then proclaimed, ‘Let there be light.’ In the context of the current data center infrastructure spend cycle, that saying takes on new significance as we grapple with the challenges of an interconnect bandwidth wall --- an industry term that highlights the issue of data movement being slower than compute speeds in GPUs. The bandwidth wall issue ties back to two related concepts I discussed last year in my Apr 2023 ‘long’ NVDA pitch:
Amdahl’s law postulates that performance is limited by the slowest link in the system. To illustrate, imagine you're Usain Bolt rushing to catch a flight. Even with your speed, if the slowest walker in your group can't keep up, the whole group won't make it to the gate on time.
Memory wall --- an industry term that underscores how memory scaling has lagged compute scaling in GPUs over the last several decades. In handling tasks such as LLM training or inference, GPUs can stay idle for up to ~50% of the time while waiting for data transfer between memory and processor.
Essentially, GPUs are bullet trains, and they should not be slowed down by the tracks laid by the connectivity stack. As interconnect scaling cannot keep up with model sizes and underlying processor growth, we are critically dependent on increasing the number of interconnects. While copper remains effective over short ranges for scale-up (i.e., increasing the performance of a single server node) of GPUs, optical connectivity has become indispensable for scale-out (i.e., expanding the number of nodes via clusters). What compounds the challenge is the square-law connectivity requirement for GPU clusters – i.e., N GPUs need on the order of N^2 connections. This requirement, even with some hardware/software optimization offsets, poses significant scaling challenges for large clusters. In addition, the focus is also on energy efficiency, measured in picojoules per bit (pJ/b), and unit cost, measured in $/Gb of data transferred via the optical components.
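To make the square-law point concrete, here is a minimal sketch (my own illustration, not from any filing) of how many direct links full any-to-any connectivity would require as clusters grow. In practice, multi-tier switch fabrics keep the link count far below a full mesh, but the quadratic pressure on interconnect counts is exactly why transceiver volumes scale faster than GPU volumes.

```python
# Illustrative sketch: full any-to-any connectivity among N GPUs needs
# N*(N-1)/2 links, i.e. O(N^2). Large clusters therefore rely on switched
# (leaf/spine) optical fabrics rather than direct point-to-point links.

def full_mesh_links(n_gpus: int) -> int:
    """Number of point-to-point links for full any-to-any connectivity."""
    return n_gpus * (n_gpus - 1) // 2

for n in (72, 1_024, 32_000):
    print(f"{n:>6} GPUs -> {full_mesh_links(n):,} direct links")

# 72 GPUs     ->       2,556 links (feasible inside a rack, e.g. over copper)
# 32,000 GPUs -> 511,984,000 links (infeasible; requires multi-tier optical switching)
```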
Executive Summary:
As a play on the optical connectivity theme, I underwrite Coherent (COHR), a maker of optical components for a 20%+ IRR for the holding period 2024-26E.
I view a bull/bear scenario of ~3.8x+ reward/risk at a relatively undemanding valuation when compared to peer AI-exposed names.
I argue that an improving industry backdrop (broad consolidation resulting in few key suppliers, accelerating pace of innovation), COHR’s vertically integrated model (they make their own lasers), and secular tailwinds in the data center may make this optical cycle less volatile than previous boom-bust phases.
I present a top-down analysis of the AI TAM, which comes to $6B+, in line with management’s estimate, assuming a conservative transceiver-to-GPU attach ratio of 3x. I triangulate this back to training/inference use cases and Marvell’s recent AI infrastructure update.
The valuation upside doesn’t depend on a dramatic improvement in COHR's non-AI, industrials-focused segments. A potential monetization of the SiC business, which is part of the non-AI side of the business, is not baked into my valuation. Mitsubishi Electric and Denso's recent purchase of a 25% stake in COHR’s SiC business for $1B alleviates the heavy capex burden of that business model.
While COHR is a levered optical name (4.3x gross debt/TTM EBITDA, 3.4x net as of Dec 2023), secular tailwinds from Datacom can help reduce net leverage to <2x by FY2025E.
Transceiver 101:
COHR is a leading manufacturer of optical transceivers, which come in different configurations and form factors. Here are some nuggets:
Transceivers are components that convert electrical signals to optical signals, and vice versa, to facilitate high-speed data transfer rates in a data center, e.g., from a server to a switch via optical fiber.
Data rates: Measured in gigabits per second, i.e., Gbps. While your home internet is ~100Mbps, data center ports run at 10-400G, with the latest trends pointing to 800G and 1.6T. (Networking and optical industries use bits, denoted by b, while data scientists and computer science professionals use bytes, denoted by B. The scale factor: 8 bits = 1 byte.)
Data rates/lane: 800G can be implemented using 8x 100G/lane or 4x 200G/lane – you get the drift (see the short sketch after Fig. 1).
Lasers are key components: they can be VCSELs (short range <50m) or EMLs (500m-10km) made at lagging edge fabs, as shown below in Fig. 1.
Fig. 1: Lasers used in transceivers. Source: Coherent Filings, March 2024
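For readers new to the lane math referenced above, a minimal sketch (illustrative only) of how port speeds decompose into lanes, and the bits-to-bytes conversion:

```python
# Minimal sketch of the lane math: a port's aggregate rate = lanes x per-lane
# rate; 8 bits = 1 byte for converting to data-scientist units.

def lanes_needed(port_gbps: int, lane_gbps: int) -> int:
    assert port_gbps % lane_gbps == 0, "port rate must be a multiple of lane rate"
    return port_gbps // lane_gbps

for port, lane in [(800, 100), (800, 200), (1600, 200)]:
    print(f"{port}G port @ {lane}G/lane -> {lanes_needed(port, lane)} lanes "
          f"(~{port / 8:.0f} GB/s of payload)")

# 800G  @ 100G/lane -> 8 lanes (~100 GB/s)
# 800G  @ 200G/lane -> 4 lanes (~100 GB/s)
# 1600G @ 200G/lane -> 8 lanes (~200 GB/s)
```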
COHR: Quick Overview
1) Coherent today is an optical/materials supplier to the following end markets (note: Q4 FY24 ends June 30, 2024):
Communications (datacom and telecom networks): 46% of Q2 FY24 revenues
Datacom is ~70% of communications and has AI-related exposure via transceivers, e.g., 800G
Industrials (e.g., precision manufacturing, semi caps, display (OLED), aero): 37% of Q2 FY24 revenues
Auto/consumer electronics: 8% of Q2 FY24 revenues
Life sciences and scientific instrumentation: 9% of Q2 FY24 revenues
2) History: Coherent is the product of recent M&A by II-VI (a semiconductor materials supplier founded in 1970, its name inspired by groups II and VI of the Periodic Table):
Finisar (optical transceiver/other component maker for networking, 3D sensing, data storage; Finisar was founded in 1988). II-VI acquired Finisar in 2019. The acquisition was a success (net leverage came down from 4x in Dec 2019 to 1x in Dec 2020), and Finisar/COHR is positioned to benefit from AI/ML secular trends in datacom with a vertically integrated approach and technology leadership.
Coherent (industrial laser maker founded in 1966). II-VI outbid Lumentum and MKS Instruments to acquire the laser maker Coherent in 2022, and subsequently assumed the name ‘Coherent.’
Bain Capital built a stake in II-VI in 2021 to fund its acquisition of Coherent, Inc. Bain Capital now owns $2.4B of Series B convertible preferred with a 5% coupon (PIK for the initial 4 years, with a cash-pay option thereafter) at an $85 conversion price, which amounts to potential dilution of ~28M shares. The preferred PIK dividend was guided to be $123M for FY24E. Bain Capital PE has had one board member, Steve Pagliuca, since 2021, with his term expiring in 2024.
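A quick arithmetic check (my math, not company guidance) of the preferred terms above:

```python
# Dilution at the conversion price and the annual PIK implied by the 5% coupon
# on the $2.4B face value (the guided $123M reflects the face accreting as it PIKs).

face_value = 2.4e9          # Series B convertible preferred held by Bain
conversion_price = 85.0     # $ per share
coupon = 0.05               # 5% PIK for the initial 4 years

potential_new_shares = face_value / conversion_price
annual_pik = face_value * coupon

print(f"Potential dilution: ~{potential_new_shares / 1e6:.0f}M shares")  # ~28M
print(f"Annual PIK dividend: ~${annual_pik / 1e6:.0f}M")                 # ~$120M vs. $123M guided
```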
3) CEO/CFO needed: Vincent Mattera, who has been with II-VI since 2017, announced earlier this year that he would retire after a 40+ year career in the industry. Prior to that, CFO Mary Jane Raymond was terminated in Sep 2023, less than 6 months into her job. While the company is managed by a seasoned senior bench with clear technical leadership, any updates on these appointments could be positive developments if the right candidates are selected.
4) Business Analysis: As a maker of components such as lasers, detectors, integrated circuits, and passive optical components, Coherent is among the key suppliers to the telecom and datacom industries.
Customers: Fig. 2 shows some customers in the telecom and datacom segments across equipment vendors and telecom and cloud service providers. Within datacom, hyperscalers such as Microsoft, Google, and Amazon, as well as Meta, are likely buyers of high-speed optical transceivers.
Fig. 2: Optical value chain. Source: A. Weissberger, LightCounting https://techblog.comsoc.org/category/optical-transceivers/
In the latest quarterly report (Q2 FY24), Coherent recorded 75% of datacom transceiver sales from hyperscalers.
Typical optical spend levels are < 5% of total cloud capex.
NVIDIA is estimated to account for about half of AI transceiver demand and was reported to have outsourced optical systems design to contract manufacturer Fabrinet in 2023. Amidst high demand, other suppliers such as Coherent and Innolight are likely to remain important over the next several years. Google has partnered in the past with Lumentum for transceivers. Amazon and Google tend to lead in the adoption of Ethernet speed upgrades, given their higher in-house expertise relative to Microsoft or Meta.
Competitors: While the current secular trend could be a rising tide lifting most boats, especially key suppliers, here’s a set of peers to consider in the landscape.
Zhongji Innolight (Market Cap ~$18B): China-based player. Finisar sold itself to II-VI mainly because it couldn’t keep up with the aggressive Innolight, especially in the data center. Innolight does not manufacture its own VCSELs and other lasers, and depends on external suppliers such as Coherent and Lumentum.
Lumentum (Market Cap ~$3B): A smaller-cap peer of Finisar. Lumentum acquired Oclaro in 2018 and Cloud Light in 2023, and is looking to solidify its presence in the data center market.
Fabrinet (Market Cap ~$6B): Contract manufacturer (similar to Flex or Jabil); a low-margin business but helpful to launch a design. They partner with optical suppliers for key components.
Broadcom (Market Cap ~$623B) is a broad-based semiconductor manufacturer. They also make VCSEL lasers, which are sold to Fabrinet.
Coherent is in a capital- and R&D-intensive industry (together ~10-15% of revenues), with cyclical end markets, each with its own boom-and-bust dynamic. The business cycles are driven by global demand/supply imbalances, including the effects of inventory levels across multiple points of the supply chain (at component suppliers such as Coherent, equipment makers, and end customers such as hyperscalers), technological advances (e.g., semi caps for Coherent’s excimer lasers), increasingly sped-up innovation cycles (with AI leader NVDA shrinking its product cycle from 2 years to 1 year), falling unit product costs with cheaper silicon, and increasing competition.
How is the boom-bust dynamic different now from a few years ago? The industry has consolidated into a few key players, with Coherent as an example of a merger of three distinct suppliers. In previous years, it was quite easy for competitors to replicate the products of cycle leaders. But product cycles have since shrunk, and the bigger players got stronger, so much so that it is hard for smaller, commoditized players to compete in delivering advanced optical components at scale. For example, Coherent benefits from its vertically integrated business model, deep technology expertise in the optical stack, and manufacturing at scale; it emerged stronger after Covid and has forged strong partnerships with cloud-provider customers and suppliers across global value chains.
Coherent is vertically integrated, making its own lasers and other components for short-, mid-, and long-range links. Coherent runs fabs across the world, e.g., in China, Malaysia, Sweden, and the USA. At OFC 2024, COHR announced 200G/lane VCSELs. VCSELs offer the lowest pJ/bit and cost and, hence, are ideal for power-constrained data centers.
Pricing: Note that transceiver ASPs decline rapidly over time from more supply coming online (as more components get qualified), tech deflation as we navigate product cycles, and an end-customer base of cost-sensitive hyperscalers. But lower prices should accelerate the pace of adoption, expand market use cases, and help the overall transceiver thesis. This observation is based on the Jevons paradox, an economics concept that says that declining unit costs spur broader usage, e.g., you drive more if gas gets cheaper.
5) Financial Overview: Some tidbits from the 10K FY23 (ending June 2023):
Standard fare for the most part. The 10K has some details on the amortization of intangible assets (customer lists, technology intangible assets, and associated weighted-average lives), which were used in my projections. COHR's net deferred tax liabilities (after accounting for some federal/state NOLs) increased, reflecting a ~$1B impact from recognizing intangible assets from II-VI’s FY2022 acquisition of Coherent Inc. (the laser maker), and should decline as the intangible assets get amortized over time.
Investment Thesis:
Coherent is positioned to benefit from the secular adoption of high-speed optical interconnects, which are critical for scaling out GPUs in this multi-year data center infrastructure spending cycle. Typical optical interconnects used in a data center may have 2 optical transceivers (one at each end).
Per a recent Wired Magazine interview [1] with NVIDIA CEO Jensen Huang, Moore’s Law is no longer just about chips but about systems. Memory bandwidth (~TB/s) and interconnect bandwidth (~Gbps) have scaled at a much slower pace when compared to compute FLOPS. The memory wall arises largely from the fact that DRAM is a capacitor-based device, and capacitors haven’t scaled well at smaller nodes due to dielectric leakage effects. Interconnect bandwidths are limited by the RC (= electrical resistance x capacitance) time delays of copper-based interconnects. AI traffic data rates far exceed those of traditional internet cloud traffic, and data centers are critically dependent on optical interconnects as fundamental building blocks for mid- and long-range data connectivity. In the absence of optics, training and inference of large language models are unlikely to be conducted efficiently at scale, further exacerbating the power grid saturation problem seen in some data center corridors. AI model requirements dictate compute and interconnect bandwidth needs.
According to K. Schmidtke's [2] keynote presentation at OFC 2024, scaling trends every two years seem to be: 100x growth for AI models but only 3.3x for compute performance and 1.4x for interconnect bandwidth. This implies a need for "30x more computing" and "70x more interconnects" every two years.
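The "30x" and "70x" figures follow directly from the keynote's growth rates; a back-of-envelope check (my arithmetic) of that implication:

```python
# If models grow ~100x every two years while per-device compute grows ~3.3x
# and per-link interconnect bandwidth ~1.4x, the residual gap must be closed
# by deploying more compute devices and more interconnects.

model_growth = 100        # per 2 years, per the OFC 2024 keynote cited above
compute_growth = 3.3
interconnect_growth = 1.4

print(f"More computing needed:     ~{model_growth / compute_growth:.0f}x")      # ~30x
print(f"More interconnects needed: ~{model_growth / interconnect_growth:.0f}x")  # ~71x
```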
Coherent has over 2 decades of leadership in developing optical transceivers and other components that facilitate interconnectivity between GPU racks and clusters within the data center.
Transceivers as traditional pluggables are agnostic to InfiniBand or Ethernet. They also fit into any of the next-generation optical module architectures and configurations, such as (i) linear pluggable optics, (ii) co-packaged optics, and (iii) linear receive optics, alongside DSP and other module makers.
COHR has a broad portfolio of products (and a roadmap to build further) for this evolving landscape. It is unique for its vertical integration as it is one of the few that makes its own lasers (thanks to II-VI acquiring Coherent, Inc.).
While a majority of the industry is equipped with 100-400Gbps ports, Dell’Oro Group, an industry research provider, predicts a rapid upgrade/ramp in 800Gbps and 1.6Tbps ports for AI network infrastructure. In previous years, port upgrade/refresh cycles ran 3-4 years, but they have now compressed to about half of that.
Fig. 3: Migration to High Speeds in AI Clusters. Source: S. Boujelbene, Dell’Oro Group blog
Coherent has specified an estimated 25% CY23-28E CAGR for the optical transceiver market, reaching $15.3B for overall transceivers and $6.6B for those dedicated to AI. Across the industry, per OFC 2024 panels, the AI transceiver TAM could be in the $5-10B range.
Fig 4: Coherent provided a TAM estimate of $6B+ for AI transceivers, which is in line with industry commentary at OFC 2024. ChatGPT and the subsequent broad-based AI/ML adoption likely usher in a multi-year inflection in transceiver demand. Source: Coherent Filings, Feb 2024
Optical Fiber Communications (OFC) Conference update from management in March 2024: guided to $450M of 800G transceiver revenues for 2H FY24E (ending June 2024), up from $150M in 1H FY24. Management said they have built the requisite capacity for the VCSEL/EML lasers and other components needed to meet the inflection in fast transceiver demand. COHR talked about product development focused on increasing data rates per lane (communication channel) for transceivers, i.e., 8x100Gbps and 4x200Gbps for 800G, and 8x200Gbps for 1.6T ports. This is important to achieve lower cost ($/Gb) and better energy efficiency (pJ/b) – a critical value proposition for hyperscaler as well as telecom customers.
Copper has advantages in low cost and low power consumption, as illustrated by NVIDIA at GTC 2024, but it can only work in the short range of <1.5m.
Context: At its annual developer conference, GTC 2024, NVIDIA launched its newest GPU rack, the DGX GB200 NVL72, with 72 GPUs connected by NVLink switches at 130TB/s of bandwidth, 720 PFLOPS of training compute at the FP8 quantization level, and 120kW of power consumption. About 20kW was saved as direct-attach copper cables (and 200Gbps SerDes technology) were used.
The scale-up happened as the copper allowed for a connectivity bandwidth of 14.4TB/s per NVLink switch tray (=130/9, with 9 switch trays per rack) for distances < 1.5m. This is a great design for connectivity within the rack.
Fig. 5: NVIDIA’s newest DGX GB200 NVL72 rack has a copper backplane. Source: NVIDIA technology blog, Keynote GTC 2024 by CEO Jensen Huang
Copper has physics-based limitations over mid- and long-range, especially at high data rates: high signal loss, lack of immunity to electromagnetic interference, and issues with scalability. At NVIDIA’s GTC 2024, Amphenol had a session on interconnects in which they showed that copper cables as short as 1 meter can result in ~40dB of loss at higher data rates, e.g., 200G (decibels are a log-scale metric; 40dB of loss leaves only 0.01% of the original signal power).
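The decibel claim is just logarithm arithmetic; a short sketch of the conversion for readers who don't live in dB:

```python
# Decibels express a power ratio on a log scale: loss_dB = -10 * log10(P_out / P_in).
# So 40 dB of channel loss leaves 10^(-4) = 0.01% of the launched signal power.

def remaining_power_fraction(loss_db: float) -> float:
    return 10 ** (-loss_db / 10)

for loss in (10, 20, 30, 40):
    print(f"{loss} dB loss -> {remaining_power_fraction(loss):.4%} of power remains")

# 40 dB -> 0.0100% of power remains, i.e. 1 part in 10,000
```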
Not surprisingly, at OFC 2024, NVIDIA essentially went on an optics-focused apology tour, calling out that scale-out --- i.e., building 32,000-GPU clusters, or 444 (=32k/72) NVL72 racks --- depends on optical connectivity, e.g., 800G optical ports (2x400G).
4. AI data centers are likely to contain increasingly larger numbers of clusters, making interconnect bandwidth all the more important. In addition to the rising adoption of high-speed ports, serving higher bandwidth also requires higher attach rates per GPU.
NVIDIA has released the reference architecture for DGX H100 superclusters (SuperPODs). The reference architecture lays out a leaf, spine, and core switch topology for the network and, by extension, the required counts of optical transceivers. For the DGX H100, from a simple analysis of a scalable unit of 1, we can estimate a 2x attach rate for 800G transceivers per GPU (see the sketch after Fig. 6).
Fig. 6: NVIDIA Reference design architecture for DGX H100 provided here. Source: NVIDIA Company website
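As a rough cross-check of the ~2x figure, here is a deliberately stylized attach-ratio formula of my own; the parameters and the simplifications (every GPU port crosses each switching tier once, every optical link has two transceiver ends, one fast transceiver can aggregate multiple slower GPU ports) are my assumptions, not NVIDIA's reference numbers:

```python
# Stylized attach-ratio sketch (my simplification; real counts depend on the
# exact leaf/spine topology, oversubscription, and which hops use copper).

def transceivers_per_gpu(tiers: int, gpu_port_gbps: int, xcvr_gbps: int) -> float:
    gpu_ports_per_xcvr = xcvr_gbps / gpu_port_gbps   # how many GPU ports one transceiver can carry
    return tiers * 2 / gpu_ports_per_xcvr            # 2 transceiver ends per optical link, per tier

# DGX H100-like case: 400G per GPU, 800G optics, 2-tier (leaf/spine) fabric
print(transceivers_per_gpu(tiers=2, gpu_port_gbps=400, xcvr_gbps=800))   # 2.0x

# If GPU port rates catch up to transceiver rates, or a third tier is added,
# the ratio drifts toward 3-4x, consistent with the industry commentary cited below.
print(transceivers_per_gpu(tiers=2, gpu_port_gbps=800, xcvr_gbps=800))   # 4.0x
print(transceivers_per_gpu(tiers=3, gpu_port_gbps=400, xcvr_gbps=800))   # 3.0x
```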
An NVL72 rack has 18 compute trays, each equipped with two 1.6Tbps ports, i.e., a total of 36 ports. COHR management called this out as an opportunity for 1.6T transceivers and an 18x (=(36x1.6)/(4x0.8)) increase in bandwidth relative to the previous architecture. As I discuss in the valuation section, this level of increase in TAM is not assumed in my analysis. I assume the transceiver-to-GPU ratio scales from 2x in the H100 use case to 3x over the next several years. The optical transceiver per GPU ratio is expected by some industry participants to rise from 2x to 3-4x+ for high bandwidths and as cluster sizes rise [3]. The exact ratio will likely depend on the particular leaf-spine architecture, the number of layers, the density of interconnects, and the adoption of the cooling assumed in the reference architecture. NVDA has not yet released a reference architecture for DGX GB200 NVL72 superclusters.
5. A phase of prolonged weakness in the deeply cyclical side of the business is likely baked into the current valuation. While optical stocks, especially levered ones, are known to be quite volatile, and the cash flows of the several moving parts are hard to predict, COHR has some offsets to consider:
The telecom segment has offsets from transceiver use cases outside the data center. Management guided for double-digit revenue growth for this segment in 2H FY24E over 1H FY24, given a recovery from inventory digestion at telecom service providers. The Sep 2023 quarter was called a bottom for the segment, and it saw 20%+ sequential growth in the Dec 2023 quarter. On the last earnings call, management cited a demand rebound from the three major China telecom service provider customers, driven by 400G deployments. Looking out over the next few years, this segment has a few secular drivers beyond design wins for ZR/ZR+ coherent transceivers [4].
As data center build-outs continue in this cycle, and as older data centers are retrofitted for accelerated computing, I think data-center-to-data-center interconnects (DCI) can be a substantial growth vector.
If edge AI picks up, as compute capacity is allocated closer to end users to achieve low-latency inference for real-world use cases, it could be a new growth vector for the telecom segment.
The SiC deal offers much-needed relief in the financial model, freeing up capital for investing in the business or paying down debt:
In Oct 2023, automotive/industrial infrastructure players Mitsubishi Electric and Denso invested $1B for a combined 25% stake in the newly formed SiC business unit of Coherent (Coherent owns the remaining 75%) at a $3B pre-money valuation (10x projected FY23E revenues). The secular thesis includes (i) electric vehicle (EV) penetration and platform changes that reduce charging time to accelerate wider adoption, and (ii) industrial RF devices and broader power semis. While EV stocks are out of favor at this time, COHR plays in the investment phase of the EV supply chain and, hence, is relatively less affected by near-term adoption trends and more diversified than commonly perceived.
The SiC segment is currently loss-making and had $175M in capex in FY22. So, the cash infusion helps fuel R&D and capex for the SiC segment. In turn, this helps the Japanese partners secure the supply of 150mm and 200mm SiC substrates and epitaxial wafers for power semis and industrial manufacturing.
As this segment becomes self-reliant for opex and capex, COHR management may focus on debt paydown for FY24E.
Monetizing SiC in the near term may be a myopic view, given management’s long-term thinking on SiC (long-term partnerships with other players in the ecosystem, e.g., GE) and vertical integration into the manufacturing of power semi devices. In the Q&A from the Oct 2023 management call on this deal, management said they are open to all possibilities. If the SiC segment were to be eventually spun off, the possible upside is likely not baked into the current COHR valuation. At Wolfspeed’s de-rated ~5x NTM revenue multiple, assuming 50% y/y revenue growth for FY25E, valuation estimates can get to ~$2.5B, which is not included in my valuation of COHR.
TAM Analysis:
Here, I triangulate between 3 disparate data points across the industry to converge on a TAM estimate for AI transceivers.
MRVL: At its Accelerated Infrastructure for AI event held last week, on April 11, 2024, Marvell suggested:
The interconnect-to-GPU attach ratio scales non-linearly with cluster size.
Training clusters may have high attach ratios, but they are outnumbered by inference clusters, which could be much smaller in size (and power consumption) and run with low attach ratios.
COHR: In its OFC 2024 update, COHR provided a ~$6.6B AI transceiver TAM for 2028E.
Per Marvell’s recent AI day, the interconnect-to-GPU ratio increases non-linearly as clusters scale to large numbers, for instance, 10x for 1M+ GPU clusters.
Fig 7: My analysis of the interconnect/GPU attach ratio. Source: Marvell, 'Accelerated Infrastructure for AI' investor deck, April 11, 2024. Note the logarithmic scale on the x-axis.
As shown in Fig. 8, if AI servers were to reach 10%+ penetration by 2028E, for a 3x attach ratio of transceivers to GPUs, I model transceivers scaling from current estimates of 5M units to ~40M by 2028E. If unit prices ($/Gb) were to decline at a ~30% CAGR over 2023-28E (in line with price decline discussion earlier in the Overview section of this write-up), I get close to the $6B+ AI transceiver TAM provided by Coherent.
Fig. 8: My top-down analysis of transceiver TAM based on a changing mix of transceiver data rates (from Fig. 3), AI server penetration, unit prices, and transceiver/GPU attach ratios
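For transparency, here is a stripped-down version of the Fig. 8 math. The unit ramp and price-decline rate come from the text above; the 2023 $/Gb starting point and the 2028 port-speed mix are my own illustrative assumptions, chosen only to show how units, port speed, and price per bit combine into a dollar TAM in the same ballpark as COHR's figure.

```python
# Stripped-down AI transceiver TAM sketch (illustrative assumptions, not COHR's
# model): TAM = units x port speed (Gb) x price per Gb.

units_2028 = 40e6            # unit ramp assumed in the text (from ~5M in 2023)
price_per_gb_2023 = 0.60     # $/Gb, my illustrative starting point (~$480 for an 800G module)
price_decline_cagr = 0.30    # ~30% annual $/Gb decline, per the text
avg_port_gb_2028 = 1_600     # assume the 2028 mix skews to 1.6T ports

price_per_gb_2028 = price_per_gb_2023 * (1 - price_decline_cagr) ** 5
tam_2028 = units_2028 * avg_port_gb_2028 * price_per_gb_2028

print(f"2028E $/Gb: ~${price_per_gb_2028:.2f}")     # ~$0.10/Gb
print(f"2028E TAM:  ~${tam_2028 / 1e9:.1f}B")       # ~$6.5B, near COHR's $6.6B estimate
```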
One way to tie this back to training and inference use cases, as suggested at a high level by Marvell, is to segment the data center landscape by power consumption in MW. An illustrative framework assuming roughly 1,000W of power consumption per GPU, a possible mix between training and inference clusters, and attach ratios in line with Marvell’s commentary is shown in Fig. 9.
Fig. 9: An illustrative framework to tie training and inference use cases, and an assumption on data center mix, back to the transceiver/GPU attach ratio. Using Marvell’s non-linear scaling, I assume a 5x+ ratio for large training clusters and a 2x ratio for inference clusters. This roughly translates to a 3x attach ratio on average. Per Marvell, the industry will likely converge to many inference clusters with a smaller number of GPUs and a few training clusters with a large number of (up to 1M+) GPUs.
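A minimal version of that blend, with the GPU mix weights being my own illustrative assumption rather than Marvell's disclosure:

```python
# Blended attach-ratio sketch (mix weights are my illustrative assumption).
# Training clusters: large, high attach; inference clusters: smaller, low attach.

gpu_share = {"training": 0.35, "inference": 0.65}   # assumed share of deployed GPUs
attach    = {"training": 5.0,  "inference": 2.0}    # transceivers per GPU, per the text above

blended = sum(gpu_share[k] * attach[k] for k in gpu_share)
print(f"Blended transceiver/GPU attach ratio: ~{blended:.1f}x")   # ~3x on average
```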
Valuation:
I triangulated on valuation using multiples as well as DCF, and stress-tested the base model. A sample framework teasing out AI vs. non-AI exposures is shown below. I underwrite the 20%+ IRR for 2024-26E assuming a 15x NTM P/E exit multiple on CY2027E EPS of ~$6, which assumes their share reaching ~1/3 of the ~$6.6B AI transceiver market and non-AI revenues recovering close to 2023 levels. The non-AI segment is valued on par with cyclical industrial businesses.
Fig. 10: Illustrative framework to think through valuation in base, bull, and bear scenarios. At an exit NTM P/E of 15x for ~$6 CY2027E EPS, I underwrite a 20%+ IRR over 2024-26E.
Two points to note here:
(i) This analysis does not include any upside from a potential monetization of the SiC business.
(ii) This analysis also implicitly assumes a base-case 3x transceiver-to-GPU attach ratio. Given the rapid changes occurring in the AI landscape, it’s too early to tell how that ratio evolves. If the attach ratio were to scale higher in a non-linear manner with large clusters, similar to how Marvell envisioned in its AI infrastructure update earlier this month, there could be upside to my AI revenue estimates for Coherent.
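To make the return math explicit, a minimal sketch of the exit-multiple calculation; the ~2.75-year holding period is my assumption for an end-2026 exit, and the sketch ignores the Bain preferred dilution and any interim cash flows for simplicity.

```python
# Back-of-envelope IRR sketch for the base case (my simplification).

entry_price = 53.60     # price at the top of this write-up
exit_pe_ntm = 15        # exit NTM P/E on CY2027E EPS (base case)
eps_cy2027e = 6.0       # ~$6 per the valuation discussion
holding_years = 2.75    # assumed: mid-2024 entry, end-2026 exit

exit_price = exit_pe_ntm * eps_cy2027e
irr = (exit_price / entry_price) ** (1 / holding_years) - 1
print(f"Implied exit price: ~${exit_price:.0f}; IRR: ~{irr:.0%}")   # ~$90; ~21%
```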
Risks:
If hyperscalers push out the adoption of faster switches (800G, 1.6T) while still ramping up 400G, optical stocks may disappoint relative to expectations.
As a levered AI beta play, COHR could see more outsized negative reactions than other AI names if hyperscalers, enterprises, or data center semis such as NVDA or AMD hint at a pause in AI spending.
COHR has several non-AI moving parts, such as networking for telecom operators, lasers for industrial manufacturing, 3D sensing, display, and SiC wafers made for the EV supply chain, all of which tend to be cyclical.
Current inventory digestion at telecom service providers may be prolonged further than expected.
References
[1] L. Goode, “NVIDIA Hardware is Eating the World,” Wired Magazine, March 2024.
[2] K. Schmidtke, et al, "Moore's Law Redefined for AI/HPC," OFC 2024, Optica Publishing Group, March 2024.
[3] Fibermall, industry trade portal technical blog https://www.fibermall.com/blog/nvidia-blackwell-development-for-dac-lacc-1600g-osfp-xd.htm.
[4] Coherent, the company, also makes a special kind of transceiver in a specialized field of optics called coherent optics, which refers to advanced modules, usually with a DSP (digital signal processor), for achieving higher data capacity over long distances.
Appendix
Fig. 11: Comparative analysis of a sample valuation metric (P/E) is shown across a broad set of comps. I use a 22x multiple for the COHR AI segment in the bull case, which is below Innolight's current NTM multiple. The non-AI segments are valued as deep cyclical industrials, for example, in the base case, a couple of turns lower than GLW's NTM P/E. Source: Bloomberg, company financials
Disclaimer: This post is not a recommendation to buy or sell a security but rather intended for a discussion of key bull and bear debates, based on research of public information, including company websites and regulatory filings. This post does not constitute professional or investment advice. Past performance is not indicative of future results. Consult with a licensed professional if needed before making any investment decisions. Investing in public markets can be subject to severe short-term and long-term risks, including resulting in permanent loss of entire invested capital. This post represents my personal opinions and not those of any other entity. I and/or entities that I advise can change views and positioning, at any point in time for any reason, or lack thereof, without any notice. I and/or entities that I advise are not and will not be held liable for any outcomes, including any indirect or consequential damages. I could be absolutely wrong. This article has no guarantee to be accurate, complete, or current, and cannot be relied on as such. Please conduct your own due diligence.
Catalysts:
Adoption/upgrade of optical interconnects to higher data rate transceivers at hyperscalers and enterprises: specifically, the continued ramp of 800G and initial shipments of 1.6T in CY2024E (around 1Q-2Q FY25E).
Appointment of thought leaders and business-savvy industry professionals (good communicators and capital allocators) to the currently open CEO and CFO positions, and a successful leadership transition.
Potential monetization of assets such as the SiC segment, where COHR received a $1B investment for a $3B pre-money valuation in 2023 from Mitsubishi Electric and Denso.
Recovery of the non-AI cyclical parts of the business, e.g., completion of inventory digestion at telecom operators and continued growth off the Sep 2023 quarter's bottom in the telecom networking segment.