

Boosting pharmaceutical lab efficiency with AI voice assistants

Mariam Jabara
10 min read

In recent years, there has been a strong trend toward digitizing, optimizing, and streamlining workflows in commercial and clinical laboratories, such as those in pharmaceutical companies. In addition to the emergence of Electronic Laboratory Notebooks (ELNs) and Laboratory Information Management Systems (LIMS), artificial intelligence (AI) is being explored as a means to further increase the efficiency and accuracy of laboratory workflows, and even to integrate with pre-existing software solutions.

PwC analyzed AI use cases in the pharmaceutical industry across four business areas: operations, research and development (R&D), commercial, and enabling functions. Use cases in operations were found to be the most impactful, accounting for 39% of overall impact, with R&D and commercial use cases following at 26% and 24%, respectively. Examples of operational use cases include inventory management and automating data capture and analysis. The report also posits that AI could potentially double operating profits: assuming a high degree of adoption and industrialization, the PwC analysis projects a $254bn increase in operating profits worldwide by 2030.

Recently, we had the pleasure of supporting a large pharmaceutical company in piloting an AI-powered voice assistant to streamline laboratory workflows and inventory management in their lab. They sought to centralize and increase the accuracy of their inventory management and to reduce time spent on data entry and record keeping during experimental protocols. We aimed to increase efficiency in the lab by developing a voice assistant able to guide scientists through workflows end-to-end.

Pilot features

The most important features to pilot for our client were voice-powered data collection and record keeping during experimental protocols, and voice-powered inventory management. Additionally, keeping manual records up to date, especially inventory records, had proved challenging with their existing approach. It was therefore essential not only to enable scientists to record data and inventory usage using only their voice, but also to synchronize user-entered data with inventory management systems, ensuring the items required for upcoming protocols were in stock and easily accessible.

In lab environments, even seemingly small human errors can be costly, sometimes requiring the repetition of an experiment. Additionally, protocols may be complex and include time-sensitive steps, where pausing to record data, take notes, or look up next steps can be highly disruptive and inefficient. Enabling scientists to interface with their protocols and record data using their voices has great potential to unlock a more seamless experience in the lab, while decreasing errors and increasing operational efficiency. 

In summary, the common challenges in pharmaceutical labs that the voice assistant aimed to address are:

  • Manual data entry and transcription errors.
  • Time-consuming data retrieval processes.
  • Difficulty in maintaining accurate and up-to-date inventory records.

Design & Technology

At Osedea, our design team plays a pivotal role in discovery and product development. Through their user-centered design principles, they support our development team and clients alike in the design of features that integrate easily with existing workflows. They often identify areas that aren't explicit pain points but are opportunities for improvement; though users may not perceive these areas as problems, such additional touches greatly elevate the user experience when implemented.

To build the prototype for this pilot project, we employed LangChain, OpenAI's GPT-4o mini, OpenAI's Whisper running locally, and function/tool calling through LangChain. For model observability, we leveraged LangSmith, a developer platform for debugging, collaborating on, testing, and monitoring LLM applications.
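As a quick illustration, here is a minimal sketch of how LangSmith tracing can be switched on. It is configured through environment variables, so the LangChain code shown later needs no changes (the project name below is a hypothetical placeholder):

import os

# LangSmith tracing is enabled via environment variables; once set,
# LangChain calls are traced automatically.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your-langsmith-api-key"  # placeholder
os.environ["LANGCHAIN_PROJECT"] = "lab-voice-assistant"     # hypothetical project name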

Broadly, our application can be broken down into the following components:

  1. Speech-to-text and voice activation (a simple activation sketch follows this list)
  2. Automated record filling
  3. Inventory management
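For illustration, here is a minimal sketch of energy-based voice activation, assuming a sounddevice microphone stream and an empirically tuned RMS threshold; a production assistant would more likely use a dedicated wake-word or voice activity detection model:

import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16000      # Whisper expects 16 kHz mono audio
BLOCK_SECONDS = 0.5      # size of each recorded chunk
THRESHOLD = 0.02         # empirical RMS threshold; tune per microphone

def wait_for_speech() -> np.ndarray:
    """Block until the input level crosses the threshold, then record
    until it drops again, returning the captured audio."""
    chunks, speaking = [], False
    while True:
        block = sd.rec(int(BLOCK_SECONDS * SAMPLE_RATE),
                       samplerate=SAMPLE_RATE, channels=1, dtype="float32")
        sd.wait()  # wait until this chunk has been recorded
        rms = float(np.sqrt(np.mean(block ** 2)))
        if rms > THRESHOLD:
            speaking = True
            chunks.append(block)
        elif speaking:
            return np.concatenate(chunks).ravel()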

Selection of a speech-to-text model

Prior to selecting Whisper as our speech-to-text model, we performed a series of benchmarking experiments on public datasets, as well as an internally collected dataset specific to pharmaceutical terminology. We examined metrics such as Word Error Rate (WER) and Real-Time Factor (RTF), the ratio of the time taken to transcribe an audio file to the duration of the audio. To learn more about our benchmarking process, read our deep-dive insights here.
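For a sense of what such a benchmark looks like in practice, here is a minimal sketch using the open-source openai-whisper and jiwer packages; the audio path and reference transcript are placeholders:

import time
import whisper                 # openai-whisper package
from jiwer import wer

model = whisper.load_model("base")

def benchmark(audio_path: str, reference: str) -> dict:
    audio = whisper.load_audio(audio_path)    # decoded to 16 kHz mono
    duration = len(audio) / 16000             # audio length in seconds
    start = time.perf_counter()
    hypothesis = model.transcribe(audio)["text"]
    elapsed = time.perf_counter() - start
    return {
        "wer": wer(reference, hypothesis),    # word error rate vs. ground truth
        "rtf": elapsed / duration,            # real-time factor: transcribe time / audio time
    }

print(benchmark("sample_protocol.wav", "add fifty millilitres of buffer"))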

Function calling and natural language query processing

LangChain is a framework for developing applications based on large language models (LLMs). It provides interoperable building blocks for composing custom and complex chains, along with support for retrieval-augmented generation (RAG) and integrations with knowledge sources, cloud providers, and databases such as AWS and MongoDB.

Tool calling, sometimes referred to as function calling, allows a model to respond to a user prompt by generating output that matches a predefined schema, or by executing a set of tasks defined by the user.

For example, if the model is prompted with a query that might benefit from a search engine, a call to a custom search-engine tool would be appropriate. The model generates the call, the application executes it, and the output is returned to the LLM. In our case, based on the user's voice command, we may want to call custom functions to execute the steps requested by the scientist (e.g. "take note that I used 50 mL of buffer in step 15"). A custom function call would then extract the appropriate arguments (the amount of buffer used and the step number) and update the corresponding protocol record. Effectively, the model parses the user's input to select the appropriate tool, extract the arguments, and call the function.

Here’s a simple example to illustrate how a tool might be called.

First, let’s define some tools we might need in the lab:

from langchain_core.tools import tool

@tool
def percent_diff(a: float, b: float) -> float:
    """Calculates the absolute percent difference between two values, rounded to the nearest tenth.

    Args:
        a: first value
        b: second value
    """
    return round(abs(a - b) / ((a + b) / 2) * 100, 1)

@tool
def percent_yield(theoretical: float, actual: float) -> float:
    """Calculates the percent yield of a product, relative to the theoretical yield, rounded to the nearest tenth.

    Args:
        theoretical: the theoretical yield of the product
        actual: the actual yield of the product
    """
    return round(actual / theoretical * 100, 1)

tools = [percent_diff, percent_yield]

Next, let’s bind the tool(s) to our language model of choice, and construct the tool calling agent:

from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate 

llm = ChatOpenAI(model="gpt-4o-mini", openai_api_key="your-api-key")

prompt = ChatPromptTemplate.from_messages(
   [
       (
           "system",
           "You are a helpful assistant to scientists in a lab. Make sure to use the available tools when answering questions that require calculations.",
       ),
       ("placeholder", "{chat_history}"),
       ("human", "{input}"),
       ("placeholder", "{agent_scratchpad}"),
   ]
)

# Construct the Tools agent
agent = create_tool_calling_agent(llm, tools, prompt)

Finally, let's execute the agent with our own user query!

# Execute the agent
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "What is the percent difference between 40 mL and 47 mL"})

> Entering new AgentExecutor chain...

Invoking: `percent_diff` with `{'a': 40, 'b': 47}`

The percent difference between 40 mL and 47 mL is 16.1%.

> Finished chain.

{'input': 'What is the percent difference between 40 mL and 47 mL',
 'output': 'The percent difference between 40 mL and 47 mL is 16.1%.'}

As we can see, the model invoked the correct tool to calculate the percent difference, just as intended! We can extend this very simple example to handle more complicated queries, and even call APIs when appropriate to make updates or request information.
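As an illustration of that extension, here is a sketch of a tool that records reagent usage against a lab inventory system; the endpoint URL and payload are hypothetical stand-ins for whatever LIMS or inventory API a lab exposes:

import requests
from langchain_core.tools import tool

@tool
def record_usage(item: str, amount_ml: float, step: int) -> str:
    """Records the amount of a reagent used at a given protocol step and
    updates the shared inventory.

    Args:
        item: name of the reagent or consumable
        amount_ml: volume used, in millilitres
        step: protocol step number
    """
    # Hypothetical endpoint; a real integration would target the lab's
    # LIMS or inventory management API.
    requests.post(
        "https://lims.example.com/api/usage",
        json={"item": item, "amount_ml": amount_ml, "step": step},
        timeout=10,
    )
    return f"Recorded {amount_ml} mL of {item} at step {step}."

With this tool added to the tools list, a command like "take note that I used 50 mL of buffer in step 15" resolves to record_usage(item='buffer', amount_ml=50, step=15).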

Note: We used the latest GPT-4o mini model as our LLM, a cost-effective and accurate model that performs strongly even alongside its larger counterparts (source: OpenAI).

Putting it all together

Now, we have selected a speech-to-text model to transcribe our users' queries, defined functions to implement our various features (such as automated record filling and inventory management), and tied it all together using LangChain. Finally, we display the relevant documents and inventories in a web app, where scientists can see the transcription of their speech, the AI assistant's responses, and the documents or inventory being updated in real time.
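Conceptually, the glue code is short. Here is a minimal sketch of the end-to-end loop, reusing the Whisper model and agent_executor from the snippets above; microphone capture and the web front-end are omitted:

def handle_voice_command(audio_path: str) -> str:
    """Transcribe a recorded voice command and route it through the agent."""
    transcript = model.transcribe(audio_path)["text"]         # speech to text
    response = agent_executor.invoke({"input": transcript})   # tool-calling agent
    return response["output"]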

Impact in the lab

Using Whisper, LangChain, and OpenAI, we were able to prototype a voice assistant that helps scientists record important experimental data and keep inventory records up to date. Typically, a member of the lab's team is responsible for recording data during a protocol. With a voice assistant enabling hands-free data capture, a team member who would otherwise be writing down results or tracking used inventory can focus on tasks specific to their role rather than on data entry.

Here’s how we plan to measure impact in our testing phase: 

  1. Hours saved using our voice assistant 
  2. Error rates, especially with key terminology and numerical values 
  3. Accuracy of the inventory database through periodic physical inventory audits

Conclusion

Overall, we successfully implemented a pilot of a voice assistant for scientists in commercial laboratories. In our analysis phase, we explored various speech-to-text models, as transcription represented a bottleneck in the overall accuracy of our application (the key is accurate transcription, especially of domain terminology, at low latency). We leveraged function calling to better handle natural language queries from our users, and worked with our design team to build a user-friendly interface for interacting with the voice assistant.

This pilot represents one of the many examples of ways commercial labs can innovate and improve their existing processes. With the current AI landscape, there is no doubt that many repetitive processes can not only be automated, but can be done intelligently and in a manner that puts the power of data at the fingertips of scientists. 

Are you interested in discussing how custom, AI-powered applications can increase efficiency in your workflows? Get in touch with us, we’d love to hear from you.
