Cloudera and NVIDIA expand AI capabilities with NVIDIA Microservices


Cloudera has announced an expanded collaboration with NVIDIA: Cloudera Powered by NVIDIA will integrate enterprise-grade NVIDIA NIM microservices into Cloudera Machine Learning.

According to the companies, NVIDIA NIM and NeMo Retriever microservices let developers link AI models to their business data, including text, images, and visualisations such as bar graphs, line plots, and pie charts, to generate highly accurate, contextually relevant responses.

Developers using these microservices can deploy applications through NVIDIA AI Enterprise, which provides optimised runtimes for building, customising and deploying enterprise-grade LLMs.

By leveraging NVIDIA microservices, Cloudera has stated that Cloudera Machine Learning will enable customers to unleash the value of their enterprise data under Cloudera management by bringing high-performance AI workflows, AI platform software, and accelerated computing to the data, wherever it resides.

In addition, Cloudera Machine Learning will integrate model and application serving powered by NVIDIA microservices to boost model inference performance across all workloads.

With this new AI model-serving functionality, Cloudera believes that customers can achieve fault tolerance, low-latency serving, and auto-scaling for models deployed anywhere, across public and private clouds.

Cloudera Machine Learning will also offer integrated NVIDIA NeMo Retriever microservices to simplify the connection of custom LLMs to enterprise data. This capability will enable users to build retrieval-augmented generation (RAG)-based applications for production use.
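The RAG pattern mentioned above retrieves relevant enterprise documents for each query and feeds them to the LLM as grounding context. The following is a minimal, generic sketch of that flow; the keyword-overlap retriever and the placeholder `generate` function are illustrative stand-ins, not the NVIDIA NeMo Retriever or NIM APIs.

```python
# Toy sketch of retrieval-augmented generation (RAG): retrieve relevant
# documents for a query, then pass them to a generator as context.
# A production system would use an embedding-based retriever and a
# hosted LLM endpoint; both are simplified here for illustration.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate(query: str, context: list[str]) -> str:
    """Placeholder for an LLM call: a real system would send the query
    plus retrieved context to a model-serving endpoint."""
    return f"Answer to {query!r} grounded in {len(context)} retrieved document(s)."

docs = [
    "Quarterly revenue grew 12 percent year over year.",
    "The support team resolved 95 percent of tickets within 24 hours.",
    "New data centre capacity came online in March.",
]
context = retrieve("How fast does support resolve tickets", docs)
print(generate("How fast does support resolve tickets", context))
```

Grounding the model in retrieved enterprise data, rather than relying on its training data alone, is what makes the responses contextually relevant to the business.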

Cloudera previously worked with NVIDIA to harness GPU-optimised data processing through the integration of the NVIDIA RAPIDS Accelerator for Apache Spark into the Cloudera Data Platform. Now, with the planned addition of NVIDIA microservices and integration with NVIDIA AI Enterprise, Cloudera claims that its Data Platform will uniquely deliver streamlined end-to-end hybrid AI pipelines.

Moving forward, Cloudera says that organisations across industries will be able to more quickly and intuitively build, customise, and deploy the LLMs that underpin transformative generative AI.

This includes applications such as coding co-pilots for speeding development time, chatbots for automating customer interactions and services, text summarisation apps for processing documents quickly, streamlined and contextual search, and much more.

These innovations aim to accelerate time to business value by making data and advanced AI processes easier and faster across the enterprise, increasing revenue generation and optimising cost.

“Cloudera is integrating NVIDIA NIM and CUDA-X microservices to power Cloudera Machine Learning, helping customers turn AI hype into business reality,” said Priyank Patel, Vice President of AI/ML Products at Cloudera. “In addition to delivering powerful generative AI capabilities and performance to customers, the results of this integration will empower enterprises to make more accurate and timely decisions while also mitigating inaccuracies, hallucinations, and errors in predictions – all critical factors for navigating today’s data landscape.”

“Enterprises are eager to leverage their massive volumes of data for generative AI to build custom copilots and productivity tools,” said Justin Boitano, Vice President of Enterprise Products at NVIDIA. “The integration of NVIDIA NIM microservices into the Cloudera Data Platform offers developers a way to more easily and flexibly deploy LLMs to drive business transformation.”
