NErlNet

Nerlnet is an open-source framework for research and deployment of distributed machine learning algorithms on IoT devices. It provides comprehensive insights into both the edge devices that run neural network models and the network's performance and statistics. Nerlnet can simulate distributed ML clusters on a single machine or on multiple machines, and deploy these clusters, with minor changes, on various kinds of IoT devices.

Nerlnet simplifies the setup of a distributed cluster that consists of many models at its edge. The communication flow can be fully controlled and monitored, and Nerlnet's Python API lets users manage the distributed cluster and gather data from it throughout an experiment.

The Nerlnet library combines the following components to achieve a stable and efficient distributed ML framework:
• The communication layer of Nerlnet is based on Cowboy, an open-source HTTP web server library.
• ML on the edge of the distributed cluster is based on OpenNN, an open-source C++ neural network library.
• Management of the Nerlnet cluster is handled by a Flask HTTP server that communicates with Nerlnet's main server to control the cluster's entities.


A Nerlnet cluster is defined by three JSON configuration files:

  • A Distributed Configuration file that defines the entities of Nerlnet: Sources, Routers, and Clients.
    • A Client is a host of Workers. A Worker is a NN model that can move between the train and predict phases.
    • A Source generates data streams that are sent to Workers.
    • A Router controls the data flow through the Nerlnet cluster.
  • A Connection Map file that defines the connections between the cluster's entities.
  • An Experiment Flow file that defines the flow of the experiment.
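
The sketch below is a minimal Python illustration (not part of Nerlnet) of loading these three files and printing their top-level sections; the file names are hypothetical examples following the dc_/conn_/exp_ naming convention used in the build steps below, and no particular schema is assumed.

```python
import json
from pathlib import Path

# Hypothetical example file names; real files are generated with NerlPlanner
# and follow the dc_/conn_/exp_ naming convention.
CONFIG_FILES = ["dc_example.json", "conn_example.json", "exp_example.json"]

def inspect_configs(experiment_dir="."):
    """Load each Nerlnet configuration file and print its top-level keys."""
    for name in CONFIG_FILES:
        path = Path(experiment_dir) / name
        with open(path) as f:
            config = json.load(f)
        print(f"{name}: {sorted(config.keys())}")

if __name__ == "__main__":
    inspect_configs()
```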

References and libraries:

  • OpenNN, an open-source neural networks library for machine learning.
  • Cowboy, an HTTP server for Erlang/OTP.
  • NIFPP, a C++11 wrapper for the Erlang NIF API.
  • Rebar3, an Erlang tool that makes it easy to create, develop, and release Erlang libraries, applications, and systems in a repeatable manner.
  • Simple Cpp Logger, a simple header-only C++ logger implementation.

Nerlnet is developed by David Leon, Dr. Yehuda Ben-Shimol, and the community of Nerlnet open-source contributors.
Academic researchers can use Nerlnet for free, provided they cite this repository.

Nerlnet Architecture Example:


Build and Run Nerlnet:

Recommended CMake version: 3.26
Minimum Erlang version: OTP 25 (tested on OTP 24, 25, 26)
Minimum gcc/g++ version: 10.3.0
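
Before building, a quick sanity check of the locally installed tool versions can help. The sketch below is not part of Nerlnet's scripts; it merely shells out to cmake, g++, and erl.

```python
import subprocess

def version_of(cmd):
    """Run a version-reporting command and return the first line of its output."""
    try:
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        lines = (out.stdout or out.stderr).strip().splitlines()
        return lines[0] if lines else ""
    except (OSError, subprocess.CalledProcessError) as err:
        return f"not found ({err})"

print("cmake :", version_of(["cmake", "--version"]))
print("g++   :", version_of(["g++", "--version"]))
# erl has no --version flag, so ask the VM for its OTP release (e.g. "26").
print("erlang:", version_of(
    ["erl", "-noshell", "-eval",
     'io:format("~s", [erlang:system_info(otp_release)]), halt().']))
```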

On every device that hosts Nerlnet cluster entities, perform the following steps:

  1. Clone this repository with its submodules: git clone --recurse-submodules <link to this repo> NErlNet
  2. Run sudo ./NerlnetInstall.sh
    2.1 With the -i argument, the script builds and installs Erlang (OTP 25) and CMake from source (validate that Erlang is not already installed before executing the installation from source).
    2.2 On successful installation, the NErlNet directory is accessible
        via the following path: /usr/local/lib/nerlnet-lib
  3. Run ./NerlnetBuild.sh
  4. Test Nerlnet by running: ./tests/NerlnetFullFlowTest.sh
  5. NerlPlanner is a Nerlnet tool that generates the JSON files required to set up a distributed Nerlnet system.
    To use NerlPlanner execute ./NerlPlanner.sh.
    Create the JSON files of the distributed configuration, connection map, and experiment flow, named as follows:
  • dc_<any name>.json
  • conn_<any name>.json
  • exp_<any name>.json
  6. Run ./NerlnetRun.sh.
  7. On the API-Server device, start a Jupyter notebook with ./NerlnetJupyterLaunch.sh and follow ApiServerInstance.help() and the examples.

Python API and Jupyter Lab (for the Api-Server):

Minimum Python version: 3.8

Communication with Nerlnet is done through a simple Python API that can be easily used from a Jupyter notebook.
The API allows the user to collect statistical insights from the distributed machine learning network:
number of messages, throughput, loss, predictions, model performance, etc.

Instructions

  1. Open a Jupyter Lab environment using ./NerlnetJupyterLaunch.sh -d <experiment_directory>
    1.1 Use -h to see the help menu of the NerlnetJupyterLaunch.sh script.
    1.2 If the --no-venv option is selected, the required modules can be read from src_py/requirements.txt.
  2. Read the instructions for importing the Api-Server in the generated readme.md file inside the <experiment_directory> folder.
  3. Follow the Example Notebook
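
For orientation, a notebook session typically starts along the lines of the sketch below. Only help() is mentioned in this README; the import path and instance name are placeholders, so follow the instructions in the generated readme.md and the Example Notebook for the actual API.

```python
# Minimal sketch of an Api-Server session inside Jupyter.
# The import path below is a placeholder; use the import instructions from the
# readme.md generated inside <experiment_directory>.
from apiServer import ApiServer  # placeholder import

ApiServerInstance = ApiServer()  # the instance referred to as ApiServerInstance above
ApiServerInstance.help()         # prints the available commands and experiment workflow
```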

Distributed ML on The Edge

Distributed ML on the edge - A new evolution step of AI.

(Introductory video: 720p_Nerlnet.Intro.mp4)

Gratitudes

Microsoft Azure

A grant of Azure credits as part of Microsoft’s Azure credits for open source projects program (2024-2025).

Amazon AWS

A grant of AWS credits as part of the AWSOpen program for open source projects (2024).