Reliant uses AI to automate labour-intensive tasks in research, like literature reviews, freeing up time for meaningful work and advancing science.
AI models have demonstrated a wide range of capabilities, but which jobs do we actually want them to perform? Ideally, the tedious ones, and academia and research have those in abundance. Reliant intends to specialise in the kind of labour-intensive data extraction work that currently occupies the time of exhausted interns and graduate students.
CEO Karl Moritz stated, “Reducing menial labour and letting people do the things that are important to them is the best thing you can do with AI to improve the human experience.” One of the most prevalent forms of this “menial labour” in research, the field where he and co-founders Marc Bellemare and Richard Schlegel have worked for years, is the literature review.
Every study references earlier and related research, but locating that work in the scientific literature takes time and can be challenging. And some studies, such as systematic reviews, cite or draw on thousands of data points.
Moritz noted that for one study, “the authors had to review 3,500 scientific publications, many of which turned out to be irrelevant.” It seemed like exactly the kind of work AI should be able to automate: a great deal of effort spent extracting a small quantity of meaningful information.
They were confident that contemporary language models could handle it; in one experiment, ChatGPT managed to extract the data with an error rate of only 11%. That is impressive, like many things LLMs can accomplish, but it falls well short of what researchers actually need.
Moritz remarked, “That’s just not good enough.” These knowledge tasks may seem simple, but they have to be completed without error.
While Tabular, Reliant’s flagship product, is partially based on an LLM (LLaMa 3.1), it is far more effective because of additional patented approaches. The team says that, applied to the multi-thousand-study extraction described above, it completed the same task with zero errors.
In other words, you throw in a thousand papers and tell Reliant what you want out of them (this, that and the other data point), and it searches through them and finds the information, whether it is precisely labelled and structured or, more often, not. The extracted data and any analysis you requested are then presented in a user-friendly interface, letting you drill down into specific examples.
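The article doesn’t detail Tabular’s interface, but the workflow it describes (hand over a pile of papers, name the fields you want, get back a structured table) can be sketched roughly as below. This is a minimal illustration only; every name in it, including the `ask_model` placeholder standing in for an LLM call, is hypothetical and not Reliant’s API.

```python
# Purely illustrative sketch of the extraction workflow described above;
# none of these names come from Reliant's actual product or API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ExtractedRecord:
    paper_id: str
    field: str
    value: Optional[str]  # None when a paper simply doesn't report the field


def ask_model(paper_text: str, question: str) -> Optional[str]:
    """Placeholder for an LLM call that reads one paper and answers one question."""
    raise NotImplementedError("swap in a real model call here")


def extract_table(papers: dict[str, str], fields: list[str]) -> list[ExtractedRecord]:
    """Ask for every requested field in every paper and collect the answers."""
    records: list[ExtractedRecord] = []
    for paper_id, text in papers.items():
        for field in fields:
            value = ask_model(text, f"What {field} does this study report, if any?")
            records.append(ExtractedRecord(paper_id, field, value))
    return records
```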
Moritz stated, “We see our role as helping the users find where to spend their attention. Our users need to be able to work with all the data all at once, and we’re building features to allow them to edit the data that’s there or go from the data to the literature.”
While less eye-catching than a virtual buddy, this specific, valuable use of AI is far more practical and has the potential to advance science in many highly technical fields. Investors have noticed: an $11.3 million seed round was led by Tola Capital and Inovia Capital, with participation from angel investor Mike Volpi.
Reliant AI Technology
Reliant’s technology is very compute-intensive, like most AI applications, so the firm decided to buy its own hardware rather than rent it as needed from a major provider. Bringing hardware in-house has risks and rewards: you have to make those costly machines pay for themselves, but you also get to explore new territory with dedicated, specialised compute.
“We have discovered that providing a thorough response in a time-constrained manner can be extremely difficult,” Moritz said. Suppose a scientist asks the system to carry out a novel extraction or analysis task across a hundred publications. It can be done quickly or done well, but not both, unless the system anticipates users’ questions and has the answer, or something close to it, ready in advance.
The startup’s chief scientific officer, Bellemare, stated, “Many people have the same questions, so we can find the answers before they ask, as a starting point. Even if it might not be exactly what you want, we can condense 100 pages of text into something that will be easier for us to work with.”
Consider this: if someone were going to ask you to pull the characters’ names out of a thousand novels, would you wait until the request arrived to start reading through them? Or would you simply do the work in advance, anticipating that the data would be needed, along with other details like dates, locations, relationships and so on? The latter, certainly, assuming you had the computing power to spare.
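To make the analogy concrete, here is a minimal sketch of that pre-extraction idea, reusing the hypothetical `ask_model` placeholder from the earlier snippet; the field names and caching scheme are illustrative assumptions, not a description of Reliant’s system.

```python
# Hypothetical pre-extraction cache: answer the questions users most often ask
# once, up front, so a later query is a lookup instead of a fresh pass over
# every document. (ask_model is the same placeholder defined in the sketch above.)
from typing import Optional

COMMON_FIELDS = ["sample size", "species", "dosage", "primary outcome"]


def precompute(papers: dict[str, str]) -> dict[tuple[str, str], Optional[str]]:
    """Extract the commonly requested fields from every paper ahead of time."""
    cache: dict[tuple[str, str], Optional[str]] = {}
    for paper_id, text in papers.items():
        for field in COMMON_FIELDS:
            cache[(paper_id, field)] = ask_model(text, f"What is the study's {field}?")
    return cache


def answer(cache: dict[tuple[str, str], Optional[str]],
           papers: dict[str, str],
           paper_id: str,
           field: str) -> Optional[str]:
    """Serve precomputed answers instantly; fall back to a live extraction otherwise."""
    if (paper_id, field) in cache:
        return cache[(paper_id, field)]
    return ask_model(papers[paper_id], f"What is the study's {field}?")
```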
This pre-extraction also gives the models more time to resolve the inevitable ambiguities and assumptions that arise across different scientific fields. What a metric “indicates” in pharmaceuticals may mean something different in pathology or clinical trials, and language models often produce different results depending on how a question is posed. Reliant’s job has therefore been to turn that uncertainty into certainty; as Moritz put it, “You can only do this if you’re willing to invest in a particular science or domain.”
As a business, Reliant is prioritising proving that the technology is profitable before taking on more challenging projects. “You need to start with something concrete, but you also need to have a big vision to make interesting progress,” Moritz remarked. From the perspective of a company that wants to survive, he said, Reliant concentrates on for-profit businesses, since they provide the funding it needs to buy GPUs, and it won’t offer customers something at a loss.
The company may come under pressure from implementation partners like Cohere and Scale, and from big AI startups like OpenAI and Anthropic, which are investing heavily in more structured tasks like database management and coding. “We’re building this on a groundswell; any improvement in our tech stack is great for us,” Bellemare said, sounding upbeat. He added that the LLM is only one of roughly eight big machine learning models in the stack; the rest are exclusive to Reliant and were built from scratch on its own data.
The biotech and research industries are only beginning their shift toward AI-driven workflows, a transition that may take years to fully materialise, but Reliant has established a solid foundation.
Moritz said, “It’s great if you want the 95% solution and you just apologise profusely to one of your customers occasionally. We’re for situations where errors matter and recall and accuracy are crucial. And honestly, that’s enough for us; we’re content to let others handle the rest.”