Welcome to our dedicated page for NVIDIA Corporation news (ticker: NVDA), a resource for investors and traders seeking the latest updates and insights on NVIDIA stock.
NVIDIA Corporation (NASDAQ: NVDA) operates in the semiconductor and related device manufacturing industry and describes itself as the world leader in AI and accelerated computing. The NVDA news stream highlights how the company’s technologies and partnerships shape AI platforms, data center infrastructure, robotics, autonomous vehicles and scientific computing.
Recent NVIDIA news includes announcements about the NVIDIA Rubin platform, which combines the Vera CPU, Rubin GPU, NVLink 6 switch, ConnectX‑9 SuperNIC, BlueField‑4 DPU and Spectrum‑6 Ethernet switch into an AI platform aimed at reducing training time and inference token costs. Other updates cover NVIDIA BlueField‑4 powering an AI‑native storage infrastructure called the NVIDIA Inference Context Memory Storage Platform, designed to support long‑context, multi‑agent AI systems.
The NVDA news feed also features sector‑specific developments. In life sciences, NVIDIA has announced expansions of the NVIDIA BioNeMo platform and a co‑innovation lab with Eli Lilly and Company to apply AI to drug discovery and related workflows. In robotics and physical AI, NVIDIA has introduced new open models such as NVIDIA Cosmos and Isaac GR00T, along with frameworks like Isaac Lab‑Arena and OSMO, and highlighted partners unveiling new robots built on NVIDIA technologies.
For autonomous vehicles, NVIDIA news includes the launch of the Alpamayo family of open AI models, simulation tools and datasets for reasoning‑based AV development. Additional items cover the Nemotron 3 family of open models for agentic AI and a strategic partnership with Synopsys to apply accelerated computing to engineering and design. Investors and observers can use the NVDA news page to follow product launches, ecosystem collaborations, AI platform updates and regulatory communications that illustrate how NVIDIA positions its technology across industries.
Amazon has announced the launch of its new EC2 P5 instances, powered by NVIDIA H100 GPUs (Hopper architecture) and aimed at enhancing generative AI training and inference capabilities. The collaboration with NVIDIA aims to deliver up to 20 exaflops of aggregate compute performance. The instances are designed to support large language models (LLMs) and complex AI applications, delivering up to 16 petaflops of mixed-precision performance and reducing training costs by up to 40%. Customers can scale to over 20,000 H100 GPUs using EC2 UltraClusters, expanding their AI development capacity.
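The headline figures above can be roughly sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes 8 GPUs per P5 instance and about 1 petaflop of sustained per-GPU throughput; both are illustrative assumptions, not official specifications, since effective throughput varies with precision mode and sparsity.

```python
# Back-of-the-envelope check of the aggregate compute figures cited above.
# GPU count per instance and sustained per-GPU throughput are assumptions
# for illustration, not official specifications.

PFLOPS = 10**15  # 1 petaflop/s
EFLOPS = 10**18  # 1 exaflop/s

# Per-instance figure cited above: 16 petaflops, assuming an 8-GPU node.
gpus_per_instance = 8
instance_pflops = 16
per_gpu_pflops = instance_pflops / gpus_per_instance
print(f"Implied peak per-GPU throughput: {per_gpu_pflops:.0f} petaflops")

# UltraCluster scale cited above: 20,000 H100 GPUs. At ~1 petaflop of
# sustained throughput per GPU (a conservative assumption), that yields:
cluster_eflops = 20_000 * 1 * PFLOPS / EFLOPS
print(f"Cluster aggregate: ~{cluster_eflops:.0f} exaflops")
```

The gap between the implied 2-petaflop peak and the 1-petaflop sustained figure used for the cluster estimate reflects the difference between peak mixed-precision ratings and realistic sustained throughput.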
Adobe and NVIDIA have announced a new partnership to co-develop advanced generative AI models aimed at enhancing creative workflows. This collaboration will deeply integrate generative AI into Adobe’s flagship products, including Photoshop and Premiere Pro, as well as through NVIDIA's Picasso cloud service. The partnership emphasizes the importance of content transparency through Adobe's Content Authenticity Initiative, which ensures users can identify AI-generated content. The initiative aims to provide safe tools for creators, promoting commercial viability while enabling third-party developers to access generative AI capabilities.
NVIDIA has launched its H100 Tensor Core GPU, designed to meet the increasing demand for generative AI training and inference. Major cloud providers, including Oracle, AWS and Microsoft Azure, now offer H100-based services. The H100 delivers significant performance improvements, with up to 9x faster AI training and up to 30x faster inference than the A100. Early adopters, such as OpenAI and Meta, are leveraging the H100 for their AI initiatives. DGX H100 supercomputers are now in full production and available worldwide, indicating strong market demand for advanced AI computing solutions.
NVIDIA has launched four new inference platforms designed for generative AI applications, enhancing developers' capabilities to create specialized AI services. These platforms utilize the latest NVIDIA Ada, Hopper, and Grace Hopper processors, including the L4 Tensor Core GPU and H100 NVL GPU. Notably, the L4 GPU offers 120x more AI video performance than CPUs, while the H100 NVL is optimized for large language models like ChatGPT, delivering up to 12x faster performance than its predecessor. Google Cloud is integrating these platforms into its machine learning service, Vertex AI, with early adopter organizations already leveraging their capabilities.
NVIDIA has launched the BioNeMo Cloud Service, expanding its generative AI cloud services aimed at revolutionizing drug discovery, protein engineering, and life sciences research. This service allows researchers to customize AI models using proprietary data, accelerating the drug development process. Early access customers like Amgen and several startups are already utilizing the platform for significant advancements in biologics discovery. BioNeMo includes pretrained AI models that facilitate rapid identification and design of drug molecules, potentially reducing costs and time in drug development.
NVIDIA has launched NVIDIA AI Foundations, a suite of cloud services enabling enterprises to create and customize generative AI models tailored to specific needs. Esteemed companies like Getty Images, Morningstar, and Shutterstock will leverage these services for applications across language, images, video, and 3D. The NeMo and Picasso services facilitate the development of proprietary AI applications, enhancing productivity in creative workflows. Additionally, notable partnerships, including one with Adobe, aim to redefine generative AI in industry.
NVIDIA has launched its DGX Cloud, an AI supercomputing service that enables enterprises to access dedicated clusters of NVIDIA DGX AI supercomputing resources via a web browser. This service streamlines the process of training advanced models for generative AI. Initially hosted on Oracle Cloud Infrastructure, it will expand to Microsoft Azure and Google Cloud. Notable clients like Amgen are utilizing DGX Cloud to enhance drug discovery processes, achieving training speeds up to 3x faster. Instances start at $36,999 per month, allowing organizations to scale AI development efficiently.
NVIDIA has partnered with Microsoft to offer cloud-based services through Microsoft Azure, giving enterprise users access to advanced AI supercomputing and industrial metaverse applications. The collaboration includes NVIDIA DGX Cloud, which will provide AI supercomputing capabilities, and NVIDIA Omniverse Cloud, a platform for developing 3D applications. Additionally, Microsoft 365 applications will be integrated with the Omniverse platform, enhancing digitalization processes across enterprises. With services available by mid-2023, this partnership aims to transform business operations by combining real-time data with advanced AI and digital twin technologies.
NVIDIA announced that Oracle Cloud Infrastructure (OCI) has integrated NVIDIA BlueField-3 data processing units (DPUs) into its networking stack. This third-generation DPU offloads data center infrastructure tasks from CPUs, enhancing performance, efficiency and security. BlueField-3 enables power reductions of up to 24% and delivers up to 4x the compute and storage processing capability of the prior generation. These advancements are intended to meet the growing demands of AI workloads and solidify OCI's position in sustainable, high-performance cloud infrastructure.
NVIDIA has made a significant breakthrough in computational lithography with its new cuLitho software library, which is being integrated into production processes by industry leaders including TSMC, ASML and Synopsys. The advancement aims to push past the physical constraints that current semiconductor manufacturing methods are approaching. cuLitho offers performance improvements of up to 40x, enabling greater energy efficiency and faster production: fabs could produce 3-5x more photomasks per day while using one-ninth the power. This innovation supports the development of 2nm process technology and promises to reduce bottlenecks in chip design.