STOCK TITAN


FriendliAI and Hugging Face Announce Strategic Partnership

Rhea-AI Impact
(No impact)
Rhea-AI Sentiment
(Very Positive)
Tags
partnership

FriendliAI and Hugging Face have announced a strategic partnership that integrates FriendliAI's accelerated generative AI infrastructure service with the Hugging Face Hub. The collaboration enables developers to deploy and serve models directly through FriendliAI Endpoints, which is now available as a deployment option on the Hugging Face platform.

According to Artificial Analysis, FriendliAI Endpoints is the fastest GPU-based generative AI inference provider. The partnership addresses challenges in production-scale AI deployment by offering automated infrastructure management through Friendli Dedicated Endpoints, providing dedicated GPU resources and automatic resource management.

The integration aims to democratize AI by combining Hugging Face's platform accessibility with FriendliAI's high-performance infrastructure, allowing developers to focus on innovation while benefiting from efficient, cost-effective model deployment.


Positive

  • Recognition as fastest GPU-based generative AI inference provider by Artificial Analysis
  • Strategic partnership with major AI platform Hugging Face expanding market reach
  • Automated infrastructure management solution reducing operational complexity

Negative

  • None.


  • Developers will be able to utilize FriendliAI's accelerated generative AI infrastructure service to deploy and serve models in the Hugging Face Hub

REDWOOD CITY, Calif., Jan. 22, 2025 /PRNewswire/ -- FriendliAI, a leader in accelerated generative AI inference serving, and Hugging Face today announced a strategic partnership that allows developers to utilize FriendliAI's inference infrastructure service to deploy and serve models directly in the Hugging Face Hub.

FriendliAI Endpoints, the fastest GPU-based generative AI inference provider according to Artificial Analysis, is now available as a deployment option on the Hugging Face platform. Directly from any model page on Hugging Face, developers can now easily deploy models using FriendliAI's accelerated, low-cost inference endpoints. This partnership leverages the convenience of Hugging Face's platform alongside FriendliAI's high-performance infrastructure, enabling developers to streamline their AI development workflow and focus on innovation.
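Deploying through the model page yields an endpoint that serves the model over HTTP. As a rough illustration of what calling such an endpoint looks like, the sketch below builds an OpenAI-compatible chat-completion request, the payload format widely used by GPU-based inference services. The endpoint URL, model identifier, and token variable shown in the comments are assumptions for illustration only; consult FriendliAI's documentation for the actual values.

```python
import json


def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat-completion payload.

    This is the common request shape for hosted inference endpoints;
    the deployed model name comes from the Hugging Face model page.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


if __name__ == "__main__":
    # Hypothetical model id for illustration.
    payload = build_chat_request("meta-llama/Llama-3.1-8B-Instruct", "Hello!")
    # A real deployment would POST this to the dedicated endpoint URL
    # with a bearer token, e.g. (names are placeholders):
    #   requests.post(ENDPOINT_URL,
    #                 headers={"Authorization": f"Bearer {FRIENDLI_TOKEN}"},
    #                 data=json.dumps(payload))
    print(json.dumps(payload, indent=2))
```

The payload-building step is separated from the network call so the request format can be inspected or tested without a live endpoint.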

Setting up and deploying generative AI models at production scale presents challenges such as complex infrastructure management and high operational costs. Friendli Dedicated Endpoints handles the hassle of infrastructure management, enabling developers to deploy and serve generative AI models efficiently on autopilot. Powered by FriendliAI's GPU-optimized inference engine, Friendli Dedicated Endpoints delivers fast and cost-effective inference serving as a managed service with dedicated GPU resources and automatic resource management.

The addition of FriendliAI as a key inference provider advances Hugging Face's mission to democratize AI, while furthering FriendliAI's mission to empower everyone to harness the full potential of generative AI models with ease and cost-efficiency. With this partnership, FriendliAI becomes a strategic inference provider for Hugging Face.

"FriendliAI and Hugging Face share a vision for making generative AI, and further agentic AI, more accessible and impactful for developers," said Byung-Gon Chun, CEO of FriendliAI. "This partnership gives developers on Hugging Face easy access to FriendliAI Endpoints, a fast, low-cost inference solution without the burden of infrastructure management. We're excited to see what the amazing developer community at Hugging Face will build with our inference solution, and we look forward to any future opportunities to partner with Hugging Face to provide developers with even more powerful tools and resources."

"FriendliAI has been at the forefront of AI inference acceleration progress," said Julien Chaumond, CTO of Hugging Face. "With this new partnership, we will make it easy for Hugging Face users and FriendliAI customers to leverage leading optimized AI infrastructure and tools from FriendliAI to run the latest open-source or their custom AI models at scale."

About FriendliAI

FriendliAI is the leading provider of accelerated generative AI inference serving. FriendliAI provides fast, cost-efficient inference serving and fine-tuning to accelerate agentic AI and custom generative AI solutions. Enjoy the GPU-optimized, blazingly fast Friendli Inference through FriendliAI's Dedicated Endpoints, Serverless Endpoints, and Container solutions. Learn more at https://friendli.ai/

About Hugging Face

Hugging Face is the leading open platform for AI builders. The Hugging Face Hub works as a central place where anyone can share, explore, discover, and experiment with open-source ML. Hugging Face empowers the next generation of machine learning engineers, scientists, and end users to learn, collaborate, and share their work to build an open and ethical AI future together. With its fast-growing community, some of the most widely used open-source ML libraries and tools, and a talented science team exploring the edge of tech, Hugging Face is at the heart of the AI revolution.

Contacts:
Elizabeth Yoon, FriendliAI, press@friendli.ai

Cision View original content:https://www.prnewswire.com/news-releases/friendliai-and-hugging-face-announce-strategic-partnership-302357253.html

SOURCE FriendliAI

FAQ

What does the FriendliAI and Hugging Face partnership mean for developers?

The partnership allows developers to deploy and serve AI models directly through FriendliAI Endpoints on the Hugging Face Hub, providing faster, low-cost inference capabilities with automated infrastructure management.

How does FriendliAI Endpoints improve AI model deployment?

FriendliAI Endpoints provides GPU-optimized inference serving, automated infrastructure management, dedicated GPU resources, and cost-effective deployment solutions for generative AI models.

What are the key benefits of FriendliAI's integration with Hugging Face?

The integration offers streamlined AI development workflow, reduced operational costs, automated resource management, and access to high-performance infrastructure without complex management requirements.

What performance advantages does FriendliAI offer through this partnership?

According to Artificial Analysis, FriendliAI Endpoints is the fastest GPU-based generative AI inference provider, offering optimized performance for model deployment and serving.