Training, deploying, and monitoring machine learning (ML) models is a critical task in virtually all production ML use cases. Real-world ML systems impose challenges beyond textbook ML scenarios and traditional software systems: the models are often far more complex and depend on an infrastructure stack and a variety of components to run.
Poor model management decisions can therefore degrade an ML system’s performance and drive up maintenance costs. One way to efficiently address the common challenges of testing and deploying ML systems is to use a centralized framework that can auto-scale dynamically in response to changes in your workload.
Rather than building machine learning infrastructure from scratch, companies can leverage the open-source frameworks and commercial platforms that are now widely available, focus on models that provide differentiated value, and continuously monitor and combat deviations in model quality such as data drift. This post presents the top six open-source frameworks for machine learning model hosting that make training and productionizing ML solutions easier and faster.
1. BentoML
BentoML is an open-source platform for high-performance machine learning model serving. This end-to-end solution for model serving helps data science teams build production-ready API endpoints for ML models with just a few lines of code, making it easy to serve and deploy machine learning models in the cloud.
BentoML comes with a high-performance API model server with adaptive micro-batching support. It provides model management and model deployment functionality, giving you an end-to-end model serving workflow with DevOps best practices baked in. It supports all major machine learning training frameworks, including TensorFlow, Keras, PyTorch, XGBoost, and scikit-learn, and it deploys to Docker, Kubernetes, Kubeflow, Knative, AWS Lambda, SageMaker, Azure ML, GCP, and more.
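As a rough illustration, here is a minimal prediction service in the style of BentoML's pre-1.0 BentoService API, wrapping a scikit-learn classifier. The exact imports, decorators, and artifact classes vary between BentoML versions, so treat this as a sketch rather than a drop-in example:

```python
import bentoml
from bentoml.adapters import DataframeInput
from bentoml.frameworks.sklearn import SklearnModelArtifact

@bentoml.env(infer_pip_packages=True)
@bentoml.artifacts([SklearnModelArtifact("model")])
class IrisClassifier(bentoml.BentoService):
    """A prediction service exposing a single /predict API endpoint."""

    @bentoml.api(input=DataframeInput(), batch=True)
    def predict(self, df):
        # Micro-batched requests arrive as a DataFrame; delegate to the model.
        return self.artifacts.model.predict(df)
```

After training a scikit-learn model `clf`, you would pack and save the service (`svc = IrisClassifier(); svc.pack("model", clf); svc.save()`) and serve it locally with `bentoml serve IrisClassifier:latest`.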
2. Streamlit
Streamlit is a flexible, open-source app framework for machine learning engineers working with Python to create custom-built applications for interacting with the data in their models. With a few lines of code, a machine learning engineer can quickly build tools without having to deal with HTTP requests, JavaScript, or HTML; all that is needed is an editor and a browser.
Streamlit watches for changes on every save and updates the app live while you code. Scripts run from top to bottom, always in a clean state, with no callbacks required. You can install Streamlit via pip in your terminal and then start writing your web app in pure Python.
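For instance, a minimal app might look like the following (the file name and widgets here are purely illustrative):

```python
# app.py -- run with: streamlit run app.py
import pandas as pd
import streamlit as st

st.title("Model exploration demo")

# Let the user upload a CSV of features and preview summary statistics
uploaded = st.file_uploader("Upload a CSV file", type="csv")
if uploaded is not None:
    df = pd.read_csv(uploaded)
    st.write(df.describe())

# Interactive widget: a decision threshold slider
threshold = st.slider("Decision threshold", 0.0, 1.0, 0.5)
st.write(f"Current threshold: {threshold}")
```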
3. RAPIDS
RAPIDS is a suite of software libraries for executing end-to-end data science and analytics pipelines entirely on GPUs, letting you increase machine learning model accuracy by iterating on models faster and deploying them more frequently. Built on CUDA-X AI and Apache Arrow, this customizable, extensible, interoperable, open-source software is supported by NVIDIA.
RAPIDS focuses on everyday data preparation tasks for analytics and data science. It includes a familiar DataFrame API that integrates with various machine learning algorithms to accelerate end-to-end pipelines without paying typical serialization costs. RAPIDS also supports multi-node, multi-GPU deployments, enabling vastly accelerated processing and training on much larger datasets.
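Because cuDF and cuML mirror the pandas and scikit-learn APIs, a GPU-accelerated pipeline looks much like its CPU counterpart. In the sketch below, the file path and column names are placeholders chosen for illustration:

```python
import cudf
from cuml.linear_model import LinearRegression

# Load data straight into GPU memory (path and columns are placeholders)
gdf = cudf.read_csv("data.csv")
X = gdf[["feature_1", "feature_2"]]
y = gdf["target"]

# Fit and predict on the GPU with a scikit-learn-style estimator
model = LinearRegression()
model.fit(X, y)
predictions = model.predict(X)
print(predictions.head())
```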
4. Acumos AI
Acumos AI is an open-source framework that enhances the development, training, and deployment of ML models. It standardizes the infrastructure stack and components required to run a general ML environment out of the box, making AI apps easy to build, share, and deploy.
The platform aims to empower data scientists to publish more adaptive AI models and to shield them from the task of developing fully integrated solutions for customers. Acumos lets developers transform software development from an exercise in writing and editing code into a classroom-like training process in which models are trained and graded on how well they analyze data sets.
5. Ray
Ray is a high-performance distributed execution framework aimed at large-scale machine learning and reinforcement learning applications. It achieves scalability and fault tolerance by abstracting the system’s control state into a global control store and keeping all other components stateless, and it uses a shared-memory distributed object store to handle large data efficiently.
It also uses a bottom-up hierarchical scheduling architecture to achieve low-latency, high-throughput scheduling, and a lightweight API based on dynamic task graphs and actors to express a wide spectrum of applications flexibly. Ray ships with libraries that accelerate deep learning and reinforcement learning development: Ray Tune (a hyperparameter optimization framework) and Ray RLlib (a scalable reinforcement learning library).
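The core of that API is the `@ray.remote` decorator: applied to a function it defines a stateless task, and applied to a class it defines a stateful actor. A minimal sketch:

```python
import ray

ray.init()  # start a local Ray runtime

@ray.remote
def square(x):
    # A stateless task that can run anywhere in the cluster
    return x * x

@ray.remote
class Counter:
    # A stateful actor whose state lives in its own worker process
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))                      # [0, 1, 4, 9]

counter = Counter.remote()
print(ray.get(counter.increment.remote()))   # 1
```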
6. Turi Create
Turi Create is an open-source toolset for creating Core ML models for tasks such as image classification, object detection, style transfer, recommendations, image similarity, and activity classification. It simplifies the development of custom machine learning models: this easy-to-use, fast, and scalable framework focuses on tasks instead of algorithms, provides built-in visualizations for exploring large datasets, and works with a variety of data formats such as text, images, audio, video, and sensor data. Finished models export directly to Core ML for use in iOS, macOS, watchOS, and tvOS apps, as in the sketch below.
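As a rough sketch of the task-focused API (the directory layout, labels, and file names here are assumptions), an image classifier can be trained and exported to Core ML in a few lines:

```python
import turicreate as tc

# Load images from a folder; derive labels from the file paths (illustrative)
data = tc.image_analysis.load_images("images/", with_path=True)
data["label"] = data["path"].apply(lambda p: "dog" if "dog" in p else "cat")

train, test = data.random_split(0.8)

# Task-oriented API: create an image classifier for the given target column
model = tc.image_classifier.create(train, target="label")
print(model.evaluate(test)["accuracy"])

# Export a ready-to-deploy Core ML model for iOS/macOS apps
model.export_coreml("ImageClassifier.mlmodel")
```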