Open Platform for Enterprise AI (OPEA) Launched by the Linux Foundation

Founded in 2018, LF AI & Data brings together open source projects dedicated to AI within the Linux Foundation. Since then, this arm of the Linux Foundation has come a long way and developed a number of projects around artificial intelligence. Today, it announced the Open Platform for Enterprise AI, or “OPEA.”

The initiative brings together key industry players including Anyscale, Cloudera, DataStax, Domino Data Lab, Hugging Face, Intel, KX, MariaDB Foundation, MinIO, Qdrant, Red Hat, SAS, VMware (acquired by Broadcom), Yellowbrick Data and Zilliz. The objective: to promote the development of open, multi-vendor, modular and robust generative AI systems.


Open source infrastructure standards for developing AI applications


“We are delighted to welcome OPEA to LF AI & Data with the promise of offering enterprises open source, standardized, modular and heterogeneous retrieval-augmented generation (RAG) pipelines, with a focus on open model development and enhanced, optimized support for various compilers and toolchains,” says LF AI & Data.

Building on these pipelines, companies can develop artificial intelligence applications and use cases across a range of verticals. Through the platform, the companies behind the initiative aim to solve the problem of fragmentation by working with the industry to standardize components, including frameworks, architectural blueprints and reference solutions that demonstrate enterprise-grade performance, interoperability, reliability and readiness.

For Ibrahim Haddad, executive director of LF AI & Data, “This initiative demonstrates our mission to drive open source innovation and collaboration within the AI and data communities under a neutral and open governance model.”

Intel, at the forefront of the initiative


“Developers tasked with creating value face a dizzying number of choices when it comes to incorporating generative AI. At this early stage, open source collaboration can establish a robust, concrete framework from which to build and evaluate modular generative AI solutions,” explains Rachel Roumeliotis, director of open source strategy at Intel, in a press release.


The firm thus gives a sense of OPEA's ambition: “OPEA provides a holistic and simple view of a generative AI workflow that includes retrieval-augmented generation (RAG) as well as other features as needed. The building blocks (open and proprietary) of this workflow include large language models (LLMs), large vision models (LVMs), multimodal models, etc.”

Also mentioned are data ingestion and processing, model and service integration, data stores, prompt engineering, memory systems, and more.
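The workflow described above, retrieval feeding prompt construction feeding generation, can be sketched in a few lines. This is a hypothetical illustration, not OPEA code: the retriever uses a toy word-overlap score where a real pipeline would query a vector store (such as Qdrant or Milvus) and pass the prompt to an LLM, and all names here are invented for the example.

```python
import re

def tokenize(text):
    """Lowercase and strip punctuation into a set of word tokens."""
    return set(re.findall(r"\w+", text.lower()))

class ToyRetriever:
    """Ranks documents by word overlap with the query.

    Stand-in for the 'data store' stage of a RAG pipeline; a real
    system would use embeddings and a vector database instead.
    """
    def __init__(self, documents):
        self.documents = documents

    def retrieve(self, query, k=2):
        return sorted(
            self.documents,
            key=lambda doc: len(tokenize(doc) & tokenize(query)),
            reverse=True,
        )[:k]

def build_prompt(query, context):
    """Prompt engineering stage: inject retrieved context before the question."""
    joined = "\n".join(context)
    return f"Context:\n{joined}\n\nQuestion: {query}\nAnswer:"

docs = [
    "OPEA is hosted by the LF AI & Data foundation.",
    "Gaudi 2 is an Intel accelerator for deep learning.",
    "RAG pipelines combine retrieval with generation.",
]
retriever = ToyRetriever(docs)
context = retriever.retrieve("What is OPEA?")
prompt = build_prompt("What is OPEA?", context)
# `prompt` would then be sent to an LLM for the generation stage.
```

The point of standardizing these stages, as OPEA proposes, is that the retriever, the prompt builder and the model become swappable components behind common interfaces rather than parts of a monolithic application.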

A chatbot and an Intel co-pilot on Gaudi 2

In the coming months, the open source community and partner companies will extend and evolve this framework, Intel says, to meet the needs of developers. As of today, the firm has committed to publishing a set of reference implementations in the OPEA GitHub repository.

These include a chatbot running on Xeon 6 and Gaudi 2, a visual question answering (VQA) system on Gaudi 2, and a copilot designed for code generation in Visual Studio Code on Gaudi 2.
