r/wallstreetbets Mar 16, 08:55 PM
📊 DD: Jensen Says AI Spend Hits $1T by 2027 — That Money Has to Flow to Hardware Like QCOM

Just doing some digging on Qualcomm (QCOM) and wanted to share a few reasons I think it’s undervalued.

**Disclosure:** I own QCOM shares. **This is not financial advice. Do your own research.**

---

# 📉 Valuation Looks Discounted

- QCOM trades **below many semiconductor peers on valuation multiples**
- The stock is still **well below prior highs despite strong fundamentals**
- Qualcomm has **strong free cash flow and high operating margins**
- Its **licensing business alone is an extremely profitable moat**

Yet the market still largely values Qualcomm like it's **just a smartphone chip company**. That narrative is increasingly outdated.

---

# 💰 Strong Fundamentals

Qualcomm continues to generate massive cash flow:

- ~**25–30% operating margins**
- **High free cash flow yield**
- **Licensing royalties across the global smartphone ecosystem**

That licensing model is incredibly powerful. Every generation of wireless standards (3G, 4G, 5G, and eventually 6G) continues to feed this revenue stream.

---

# 🚗 Growth Beyond Smartphones

Qualcomm is expanding aggressively into multiple compute markets:

### Automotive

- Snapdragon Digital Chassis
- ADAS and autonomous compute
- Software-defined vehicles

Automotive revenue already **exceeds $1B per quarter**, and Qualcomm is targeting roughly **$9B by 2029**. Customers include:

- Volkswagen
- BMW
- GM
- Hyundai
- NIO
- multiple others

Cars are becoming **rolling data centers**, and Qualcomm wants to be the compute platform.

---

# 🧠 The Overlooked Piece: AI200

One of the most overlooked parts of Qualcomm’s strategy is **AI inference hardware**.
Qualcomm’s **AI200 / Cloud AI 100 accelerator** targets:

- **Data center AI inference**
- **Large language model serving**
- **Edge AI workloads**
- **Energy-efficient AI compute**

This matters because **AI inference will likely dwarf AI training workloads** over time. GPUs dominate **training**, but inference can run on specialized accelerators.

The key advantage Qualcomm pushes here:

👉 **Performance per watt**

Inference is about **efficiency at scale**, not just raw power. For hyperscalers running millions of AI queries, **power efficiency becomes critical**.

---

# 🤖 Tie-In to Jensen Huang’s $1T AI Spend Prediction

NVIDIA CEO Jensen Huang has repeatedly said he believes **AI infrastructure spending could exceed $1 trillion by 2027**.

That money doesn't just go into software. It flows into **hardware infrastructure**, including:

- GPUs
- AI accelerators
- networking
- edge compute
- automotive compute
- inference silicon

If **$1T is being spent on AI infrastructure**, a massive portion of that budget must flow into **semiconductor hardware**. Companies positioned for that include:

- NVIDIA
- AMD
- Broadcom
- **Qualcomm**

Most people associate QCOM only with **phones**, but Qualcomm is increasingly building **AI compute platforms across multiple markets**. That includes:

- **AI
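To make the "performance per watt" point concrete, here's a back-of-the-envelope sketch of why efficiency matters at hyperscaler scale. Every number here is a made-up illustration (query volume, joules per query, and electricity price are my assumptions, not Qualcomm or NVIDIA figures):

```python
# Back-of-the-envelope inference electricity cost.
# ALL numbers below are hypothetical, for illustration only.
QUERIES_PER_DAY = 1_000_000_000       # assumed daily query volume
JOULES_PER_QUERY_GPU = 1000.0         # assumed energy/query on general-purpose GPUs
JOULES_PER_QUERY_ACCEL = 400.0        # assumed energy/query on an efficiency-tuned accelerator
PRICE_PER_KWH = 0.08                  # assumed industrial electricity price, USD

def annual_power_cost(joules_per_query: float) -> float:
    """Electricity cost per year for serving the assumed daily query load."""
    joules_per_day = joules_per_query * QUERIES_PER_DAY
    kwh_per_day = joules_per_day / 3.6e6   # 1 kWh = 3.6 million joules
    return kwh_per_day * 365 * PRICE_PER_KWH

gpu_cost = annual_power_cost(JOULES_PER_QUERY_GPU)
accel_cost = annual_power_cost(JOULES_PER_QUERY_ACCEL)
print(f"GPU:         ${gpu_cost:,.0f}/yr")
print(f"Accelerator: ${accel_cost:,.0f}/yr")
print(f"Savings:     ${gpu_cost - accel_cost:,.0f}/yr")
```

With these toy inputs, a 2.5x efficiency edge saves millions per year in power alone, per billion daily queries, before even counting the smaller cooling and facility footprint. That's the kind of math hyperscalers actually run when picking inference silicon.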