Embedding Machine Learning Models in the Oracle Database: Create an ONNX model

 

This post is the first of a three-part series where I’m going to show you how to use pre-configured machine learning models to embed vectors into the Oracle Database. Before I dive into how to load a pre-trained machine learning model with ONNX, it is helpful to know what an ONNX file is and how you create one to use with the Oracle Database.

What is an ONNX model?

ONNX is an open-source format designed for machine-learning models. It ensures cross-platform compatibility and supports the major languages and frameworks, facilitating easy and efficient model exchange.

ONNX stands for Open Neural Network Exchange. It is a popular choice that enables models to be deployed, integrated, and exchanged consistently across cloud, web, edge, and mobile platforms. While the name implies neural networks, the format also accommodates models that employ other algorithms. Many leading machine learning development frameworks, such as TensorFlow, PyTorch, and Scikit-learn to name a few, offer the capability to convert models into the ONNX format.
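
As a quick illustration of what that conversion looks like, here is a minimal PyTorch sketch; the toy model and output file name are made up for this example, and other frameworks have their own converters (for example, tf2onnx for TensorFlow and skl2onnx for Scikit-learn):

import torch
import torch.nn as nn

# A toy model standing in for whatever you have actually trained.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
model.eval()

# torch.onnx.export traces the model with a sample input and writes an .onnx file.
dummy_input = torch.randn(1, 8)
torch.onnx.export(
    model,
    dummy_input,
    "toy_model.onnx",           # hypothetical output file name
    input_names=["input"],
    output_names=["output"],
)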

The ONNX format offers the flexibility to export and import models in many languages, such as Python, C++, or C#. Oracle Database 23c (the latest pending release) supports importing these externally trained ONNX files into the Oracle Database and performing in-database scoring, that is, applying a machine learning model to new data, through ONNX Runtime.

ONNX Runtime is an inference engine for ONNX models. With ONNX Runtime, you can run machine learning models in the ONNX format efficiently.
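
Outside the database, ONNX Runtime is also available as a Python package (onnxruntime). The sketch below shows the general inference pattern; the file name and input name are assumptions carried over from the export sketch above:

import numpy as np
import onnxruntime as ort

# Load the ONNX file and run it against a NumPy array keyed by the model's input name.
session = ort.InferenceSession("toy_model.onnx")
sample = np.random.randn(1, 8).astype(np.float32)
outputs = session.run(None, {"input": sample})   # None = return every model output
print(outputs[0].shape)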

An imported ONNX model is represented as an in-database object, similar to the Oracle Machine Learning (OML) model objects. With the appropriate permissions, ONNX models can be imported for machine learning tasks and then used for scoring through the OML scoring SQL operators.

An example of importing an ONNX model is as follows:

BEGIN
  DBMS_DATA_MINING.IMPORT_ONNX_MODEL(
    '<model>',
    '<db_model_name>',
    JSON('{"function": "embedding", "embeddingOutput": "embedding", "input": {"input": ["DATA"]}}'));
END;
/

In this example, the IMPORT_ONNX_MODEL procedure is used to import an ONNX model. The following is a breakdown of what is needed:

<model> : a BLOB argument that holds the ONNX representation of the model.

Example: <model> = my_embedding_model.onnx

<db_model_name> : a user-defined name for the model. This is the name that SQL will use when the model is called.

Example: <db_model_name> = doc_model

The third argument is a JSON metadata document. In this example it declares the model function ("embedding"), the output that holds the embedding ("embeddingOutput"), and the mapping of the model's input to the DATA attribute ("input").
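
To give a sense of how the import might be scripted end to end, here is a hedged sketch that uses the python-oracledb driver to read an exported .onnx file and call the procedure. The connection details are placeholders, and the argument order simply follows the example above:

import oracledb

# Read the exported ONNX file into memory (file name from the example above).
with open("my_embedding_model.onnx", "rb") as f:
    model_data = f.read()

# Placeholder connection details -- replace with your own.
with oracledb.connect(user="<username>", password="<password>",
                      dsn="<host>/<service>") as conn:
    with conn.cursor() as cur:
        # Bind the bytes as a BLOB so larger model files are handled correctly.
        cur.setinputsizes(model_data=oracledb.DB_TYPE_BLOB)
        cur.execute(
            """
            BEGIN
              DBMS_DATA_MINING.IMPORT_ONNX_MODEL(
                :model_data,
                'doc_model',
                JSON('{"function": "embedding",
                       "embeddingOutput": "embedding",
                       "input": {"input": ["DATA"]}}'));
            END;
            """,
            model_data=model_data,
        )
    conn.commit()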

Obtaining a pre-trained model:

Before you can load a pre-trained model, you need to have one. So where can you get a pre-trained model?

To import a pre-trained model, you first must have the Python package called Oracle Machine Learning Utilities (omlutils). Sadly, at the time of this writing, the only way to get this package is from Oracle as a wheel package. Once you have the omlutils package, it needs to be uploaded to the server where the Oracle Database is running.

Installing OMLUTILS

Hopefully, when Oracle Database 23c is fully released, the omlutils binaries will be included in the Oracle Database Home, much like Oracle has done with Python 3.12. The next few steps give an overview of how to install omlutils into the local Python environment.

1. Verify you have Python 3.12 installed.

$ export ORACLE_HOME_23c=/opt/oracle/product/23c/dbhome_1
$ cd $ORACLE_HOME_23c/python/bin
$ ./python -V
$ export PATH=$ORACLE_HOME_23c/python/bin:$PATH
$ python -V

Both "python -V" commands should return:

Python 3.12.0

2. Create an ONNX directory

$ cd ~
$ mkdir onnx

3. Unzip the omlutils.zip file in the onnx directory

$ cd ~/onnx
$ unzip ./omlutils.zip -d .

4. After the omlutils have been unzipped, install the package using pip

$ cd ~/onnx
$ python -m pip install -r requirements.txt
$ python -m pip install omlutils-0.13.0-cp312-cp312-linux_x86_64.whl

Included Pre-Trained Models

The omlutils package comes with seventeen different pre-trained models that can be used to generate embedding vectors. These pre-trained models are ready to use immediately and can be listed from Python using the show_preconfigured() function.

1. sentence-transformers/all-mpnet-base-v2
2. sentence-transformers/all-MiniLM-L6-v2
3. sentence-transformers/multi-qa-MiniLM-L6-cos-v1
4. ProsusAI/finbert
5. medicalai/ClinicalBERT
6. sentence-transformers/distiluse-base-multilingual-cased-v2
7. sentence-transformers/all-MiniLM-L12-v2
8. BAAI/bge-small-en-v1.5
9. BAAI/bge-base-en-v1.5
10. taylorAI/bge-micro-v2
11. intfloat/e5-small-v2
12. intfloat/e5-base-v2
13. prajjwal1/bert-tiny
14. thenlper/gte-base
15. thenlper/gte-small
16. TaylorAI/gte-tiny
17. infgrad/stella-base-en-v2

The steps to see these pre-configured models from an interactive Python prompt are as follows:

$ python

>>> from omlutils import EmbeddingModel, EmbeddingModelConfig
>>> em = EmbeddingModel(model_name="sentence-transformers/all-MiniLM-L6-v2")
>>> emc = EmbeddingModelConfig()
>>> emc.show_preconfigured()
>>> exit()
$

To convert one of these pre-configured models into an ONNX file, the steps are as follows. Again, you are using an interactive Python prompt here.

For model recalibrations, these steps can be put into a Python script that runs on a regular basis.

$ cd ~/onnx
$ python
>>> from omlutils import EmbeddingModel, EmbeddingModelConfig
>>> em = EmbeddingModel(model_name="sentence-transformers/all-MiniLM-L6-v2")
>>> em.export2file("all-MiniLM-L6-v2", output_dir=".")
>>> exit()
$

When you look in the ~/onnx directory, you will see an ONNX file that matches the name of the pre-configured model. In this example, the file name is all-MiniLM-L6-v2.onnx.
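
As a quick sanity check before handing the file to the database, you can open it with ONNX Runtime and list its inputs and outputs; a minimal sketch, assuming the file exported above:

import onnxruntime as ort

# Open the exported file and list its inputs and outputs. Knowing the input
# name is useful when you build the JSON metadata for IMPORT_ONNX_MODEL.
session = ort.InferenceSession("all-MiniLM-L6-v2.onnx")

for inp in session.get_inputs():
    print("input: ", inp.name, inp.shape, inp.type)
for out in session.get_outputs():
    print("output:", out.name, out.shape, out.type)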

Summary

To round all this out: in the upcoming release of Oracle Database 23c, you will have the ability to take an open-source machine learning model and embed it in the database. Once the model is embedded in the database, you can use it to create vectors on existing data or to keep those vectors updated, enabling a vector database in a secure environment while powering private Retrieval-Augmented Generation (RAG) in diverse environments.
