Data-aware NGINX for Distributed Machine Learning | UnifyID

Speaker: Lef Ioannidis, Architect, UnifyID

In this talk Lef Ioannidis, Architect at UnifyID, will show you how the company leveraged NGINX modules to build a data-aware load balancer that scales UnifyID's Machine Learning back end. The ML back end is capable of servicing 100+ million users, with workloads that require both CPU- and GPU-intensive computation.
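To make the idea of data-aware load balancing concrete, here is a minimal Python sketch of the general pattern: the request payload decides whether a job goes to a CPU or a GPU pool, and a stable hash of the user ID keeps each user's data on the same worker. The pool names, payload fields, and hashing scheme are illustrative assumptions, not UnifyID's implementation or the NGINX module discussed in the talk.

# Sketch only: route by request data (task type), then hash the user ID
# so the same user's requests always land on the same worker.
import hashlib

CPU_POOL = ["cpu-1:8000", "cpu-2:8000"]   # hypothetical backend pools
GPU_POOL = ["gpu-1:8000", "gpu-2:8000"]

def pick_backend(payload: dict) -> str:
    # GPU-heavy jobs go to the GPU pool, everything else to the CPU pool.
    pool = GPU_POOL if payload.get("task") == "train" else CPU_POOL
    # A stable hash of the user ID gives sticky, data-based placement.
    digest = hashlib.sha256(payload["user_id"].encode()).hexdigest()
    return pool[int(digest, 16) % len(pool)]

print(pick_backend({"user_id": "alice", "task": "train"}))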

He’ll demonstrate how he runs machine learning as Docker microservices on AWS instances, and how he has tackled problems often seen when deploying Machine Learning clusters in production. Issues addressed include, but are not limited to:

– Horizontal scaling and statefulness.
– Data-based load balancing and workload distribution.
– Message-passing distributed model.
– No shared-memory model (see the sketch after this list).
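The last two items describe a shared-nothing design in which workers exchange messages instead of sharing state. The sketch below illustrates that pattern in plain Python with multiprocessing queues; it is only an illustration of the model, not the talk's actual distribution layer.

# Workers communicate only through queues (message passing); no state
# is shared in memory. The job and result formats are made up.
from multiprocessing import Process, Queue

def worker(inbox: Queue, outbox: Queue) -> None:
    while True:
        job = inbox.get()          # receive work as a message
        if job is None:            # sentinel message: shut down cleanly
            break
        outbox.put({"user_id": job["user_id"], "score": 0.9})  # dummy result

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put({"user_id": "alice", "features": [0.1, 0.2]})
    print(outbox.get())            # results come back as messages too
    inbox.put(None)
    p.join()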

For Machine Learning, UnifyID leverages multiple frameworks (TensorFlow, Caffe, Torch) by creating a uniform API for its ML microservices, which makes it possible to run unreliable, academic-quality code reliably in production. Open source repositories relevant to this talk will be included.
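A uniform API of this kind can be as simple as a shared interface that every framework-specific microservice implements, so callers and the load balancer never depend on which library sits underneath. The class and method names below are hypothetical and the framework calls are stubbed out; this is a sketch of the idea, not UnifyID's API.

# Sketch: one predict() interface, many framework-specific implementations.
from abc import ABC, abstractmethod
from typing import Sequence

class Model(ABC):
    @abstractmethod
    def predict(self, features: Sequence[float]) -> float:
        """Return a score for one feature vector."""

class TorchModel(Model):
    def predict(self, features: Sequence[float]) -> float:
        # Real code would run a loaded Torch model; stubbed here.
        return sum(features) / max(len(features), 1)

class TensorFlowModel(Model):
    def predict(self, features: Sequence[float]) -> float:
        # Real code would run a TensorFlow graph; stubbed here.
        return max(features, default=0.0)

def serve(model: Model, features: Sequence[float]) -> float:
    # The microservice entry point depends only on the uniform interface.
    return model.predict(features)

print(serve(TorchModel(), [0.1, 0.4, 0.5]))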
