French AI Startup FlexAI Debuts with $30M for Cloud Service

French AI startup FlexAI secures a substantial seed investment to revolutionize AI compute infrastructure.

A French startup has secured a substantial seed investment to “rearchitect compute infrastructure” for developers who want to build and train AI applications more efficiently.

FlexAI, a Paris-based company, had been operating under the radar since October 2023. On Wednesday, however, it formally debuted with €28.5 million ($30 million) in funding and began promoting its first product: an on-demand cloud service for AI training.

A raise that substantial for a seed round typically signals founders with a notable pedigree, and that is the case here. Brijesh Tripathi, co-founder and CEO of FlexAI, was previously a senior design engineer at GPU behemoth Nvidia. He subsequently held several senior engineering and architecture positions at Apple; at Tesla (where he reported directly to Elon Musk); at Zoox (before Amazon acquired the autonomous driving startup); and, most recently, at AXG, Intel’s AI and supercomputing platform unit.

Dali Kilani, co-founder and CTO of FlexAI, has an equally impressive résumé, having held various technical positions at companies such as Zynga and Nvidia before most recently serving as CTO of Lifen, a French startup that develops digital infrastructure for the healthcare industry.

Alpha Intelligence Capital (AIC), Elaia Partners, and Heartcore Capital led the seed round, which also included First Capital, Motier Ventures, Partech, and InstaDeep CEO Karim Beguir.

FlexAI team in Paris. Image Credits: FlexAI

The computational dilemma

To comprehend Tripathi and Kilani’s endeavors with FlexAI, it is imperative first to grasp the challenges developers and AI practitioners encounter when attempting to access “compute.” This pertains to the infrastructure, resources, and processing power required to execute computational tasks, including data processing, algorithm execution, and machine learning model implementation.

“Using any infrastructure in the AI space is complex; it is not for the faint of heart, nor the inexperienced,” Tripathi told TechCrunch. “It requires an excessive amount of knowledge about how to build infrastructure before it can be utilized.”

By contrast, the public cloud ecosystem has matured considerably over the last two decades, exemplifying how an industry can arise from developers’ need to build applications with minimal concern for the underlying infrastructure.

Tripathi stated, “If you are a small developer interested in creating an application, you can do so without knowing where it is running or its back end; simply launch an EC2 [Amazon Elastic Compute Cloud] instance, and you’re done.” “That is currently impossible with AI computing.”
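
For a sense of how low that bar is in the general-purpose cloud, here is a minimal sketch of the workflow Tripathi describes, using AWS’s boto3 SDK; the AMI ID is a placeholder, and configured AWS credentials are assumed:

```python
# Launching an EC2 instance really is a few lines: pick an image and a
# size, and AWS handles the underlying hardware for you.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0abcdef1234567890",  # placeholder machine image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```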

In AI compute, by contrast, developers must determine how many GPUs (graphics processing units) need to be interconnected and over what type of network, then stand up the software ecosystem to accomplish this. If a GPU fails, the network fails, or anything else in that chain malfunctions, the developer is responsible for resolving the issue.
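
To make the contrast concrete, here is a hedged sketch, in PyTorch, of the kind of failure handling developers currently write by hand; the checkpoint cadence and recovery logic are purely illustrative:

```python
# A sketch of the babysitting an AI developer writes by hand today
# (hypothetical single-node loop; on a real cluster, detecting and
# replacing a failed GPU or network link is harder still).
import torch
import torch.nn as nn

model = nn.Linear(128, 10)  # stand-in for a real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
CHECKPOINT = "checkpoint.pt"

def train_step() -> None:
    x = torch.randn(32, 128)
    loss = model(x).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

for step in range(300):
    try:
        train_step()
        if step % 100 == 0:
            # Checkpoint frequently: a single GPU or NIC failure would
            # otherwise throw away everything since the last save.
            torch.save({"step": step, "model": model.state_dict()}, CHECKPOINT)
    except RuntimeError:
        # CUDA and NCCL errors surface as RuntimeError; today the
        # developer must catch them, re-provision healthy hardware,
        # and resume from the last checkpoint themselves.
        state = torch.load(CHECKPOINT)
        model.load_state_dict(state["model"])
```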

“We aim to bring AI compute infrastructure to the same level of simplicity that general-purpose cloud infrastructure has reached. That took two decades, but there is no reason why AI compute cannot enjoy the same benefits,” Tripathi explained. “We aim to reach a point where running AI workloads does not require you to become a data center expert.”

FlexAI will launch its first commercial product later this year, once a limited number of beta customers have evaluated the current iteration. It functions as a cloud service that connects developers to “virtual heterogeneous compute,” enabling them to run their workloads and deploy AI models across various architectures while paying for what they consume rather than renting GPUs by the hour.

GPUs are crucial to advancing artificial intelligence, powering both the training and the execution of large language models (LLMs). Nvidia, the dominant player in the GPU industry, has benefited enormously from the AI revolution set off by OpenAI and ChatGPT: since OpenAI introduced an API for ChatGPT in March 2023, enabling developers to integrate its functionality into their own applications, Nvidia’s market capitalization has risen from approximately $500 billion to more than $2 trillion.

The technology sector is flooded with LLMs, and GPU demand is soaring. However, GPUs are expensive to run, and renting them for ad hoc use cases or smaller tasks is not always practical and can be prohibitively expensive, which is why AWS has been dabbling in time-limited rentals for smaller AI projects. But renting is still renting, so FlexAI aims to abstract away the underlying complexities so that clients pay for AI compute only when they need it.

“Multicloud optimized for AI”

The premise underlying FlexAI is that most developers are indifferent to the manufacturer of their GPUs or processors, be it Nvidia, AMD, Intel, Graphcore, or Cerebras. Their primary concern is developing artificial intelligence and constructing applications within their budgetary constraints.

This is where FlexAI’s “universal AI compute” concept comes in: FlexAI maps the user’s requirements to whichever architecture best suits the given task, handling all the conversions across platforms such as AMD’s ROCm, Intel’s Gaudi infrastructure, and Nvidia’s CUDA.
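
FlexAI has not published how this dispatch layer works, but the first step, detecting which vendor stack is present on a machine, might look something like this in PyTorch (the Gaudi check assumes Intel’s habana_frameworks bridge package):

```python
# A toy backend probe (not FlexAI's actual code) of the kind a
# "universal AI compute" layer would need before placing a workload.
import torch

def detect_backend() -> str:
    if torch.cuda.is_available():
        # ROCm builds of PyTorch reuse the torch.cuda namespace for AMD
        # GPUs but report a HIP version; CUDA builds leave it as None.
        return "rocm" if getattr(torch.version, "hip", None) else "cuda"
    try:
        # Intel Gaudi support ships as a separate bridge package that
        # registers an "hpu" device with PyTorch.
        import habana_frameworks.torch  # noqa: F401 (optional dependency)
        return "gaudi"
    except ImportError:
        return "cpu"

print(detect_backend())
```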

“This means that the developer’s sole focus is on constructing, training, and utilizing models,” explained Tripathi. “We attend to everything beneath. We handle failures, recovery, and reliability, and you pay only for the resources that you utilize.”

In many respects, FlexAI aims to fast-track for AI what has already happened in the cloud, and this entails more than merely duplicating the pay-per-use model: it means the ability to go “multicloud,” capitalizing on the unique advantages of different GPU and processor infrastructures.

FlexAI will allocate a client’s workload according to the client’s stated priorities. A business with a limited budget for developing and fine-tuning its AI models can set a cap on that budget via the FlexAI platform to get the most compute for its money. This may mean using Intel’s cheaper (but slower) compute, but if a developer has a small run that demands the fastest possible output, the workload can be routed to Nvidia instead.
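
As a rough illustration of that trade-off, consider a toy scheduler with hypothetical vendors and made-up prices; FlexAI’s actual placement logic is not public:

```python
# A toy placement function: route by the client's stated priority,
# as the article describes (cheap-but-slow under a budget cap,
# fastest hardware when time matters most).
from dataclasses import dataclass

@dataclass
class Offer:
    vendor: str
    usd_per_hour: float  # made-up illustrative prices, not market rates
    est_hours: float     # estimated wall-clock time for this job

OFFERS = [
    Offer("nvidia", usd_per_hour=4.00, est_hours=2.0),
    Offer("amd", usd_per_hour=2.50, est_hours=3.0),
    Offer("intel", usd_per_hour=1.20, est_hours=5.0),
]

def place(priority: str, budget_usd: float | None = None) -> Offer:
    if priority == "speed":
        # Fastest time to result, regardless of price.
        return min(OFFERS, key=lambda o: o.est_hours)
    # Cost priority: cheapest total bill among offers within budget.
    affordable = [
        o for o in OFFERS
        if budget_usd is None or o.usd_per_hour * o.est_hours <= budget_usd
    ]
    return min(affordable, key=lambda o: o.usd_per_hour * o.est_hours)

print(place("cost", budget_usd=8.0).vendor)  # -> intel (cheapest total)
print(place("speed").vendor)                 # -> nvidia (fewest hours)
```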

In essence, FlexAI functions as an “aggregator of demand,” renting hardware through conventional channels, securing preferential pricing through its “strong connections” with people at Intel and AMD, and passing the savings on to its own customers. This doesn’t necessarily mean sidestepping market leader Nvidia, but it does mean that Intel and AMD, competing for the GPU scraps left in Nvidia’s wake, have a substantial incentive to experiment with aggregators like FlexAI.

“Intel and AMD will be ecstatic if I can make it work for them and onboard tens to hundreds of customers to their infrastructure,” Tripathi stated.

This contrasts with other prominent GPU cloud players, such as the heavily funded CoreWeave and Lambda Labs, which focus solely on Nvidia hardware.

“I want to bring AI compute to where general-purpose cloud computing is today,” Tripathi stated. “You cannot go multicloud in AI. You have to pick specific hardware, the number of GPUs, the infrastructure, and the connectivity, and then maintain it all yourself. Today, that is the only viable way to obtain AI compute.”

Asked about launch partners, Tripathi said he could not yet confirm all of their identities, citing the absence of “formal commitments” from some of them.

“Intel is an unquestionably strong partner; they provide infrastructure, and AMD is an infrastructure partner as well,” he stated. “And as a second layer of partnerships that we are not yet prepared to disclose, memorandums of understanding [MOUs] are currently being signed with Nvidia and a few other silicon companies; they are all in the running.”

The Elon effect

Tripathi is well-prepared to confront the challenges ahead, given his extensive experience working for some of the world’s largest technology companies.

“I have sufficient knowledge of GPUs; I used to build them,” Tripathi stated. His seven-year tenure at Nvidia concluded in 2007, when he left the company to join Apple as it launched the first iPhone. “At Apple, my primary objective was to solve tangible customer problems. I was there during the initial stages of Apple’s development of SoCs (systems on chip) for mobile devices.”

Tripathi was also hardware engineering lead at Tesla from 2016 to 2018, where he worked directly under Elon Musk for his final six months, after two people above him abruptly departed.

“What I learned at Tesla, and what I am incorporating into my startup, is that the only limitations are those imposed by science and physics,” he explained. “The way things are done today does not reflect how they should or must be done. To determine the correct course of action from the ground up, you must remove every black box.”

Tripathi contributed to Tesla’s transition to making its own chips, a move that GM and Hyundai, among other automakers, have since emulated.

“One of my first tasks upon joining Tesla was determining how many microcontrollers a vehicle contains. To do that, our team had to sift through a number of those enormous black boxes, encased in metal shielding and casing, to locate the tiny microcontrollers inside,” Tripathi explained. “In the end, we laid them out on a table and told Elon: ‘A vehicle contains fifty microcontrollers, and we sometimes pay one thousand times the margin on them because they are encased in a massive metal casing.’ He said, ‘Let’s go make our own.’ And we did.”

GPUs as collateral

Further down the line, FlexAI also intends to build out its own data center infrastructure. Tripathi said this will be financed through debt, continuing a recent trend in which rivals such as Lambda Labs and CoreWeave have used Nvidia chips as collateral to secure loans rather than giving away more equity.

“Banks now accept GPUs as collateral,” Tripathi explained. “Why give away equity? We cannot obtain the hundreds of millions of dollars required to build data centers until our company grows into a legitimate compute provider. If we relied on equity alone, we would vanish once the capital is gone. But if we pledge GPUs as collateral, the banks can seize them and place them in another data center.”
