In this article, you will discover Hugging Face, the library that’s shaking up the world of Deep Learning.
Hugging Face is both a community and a platform for researching and using large-scale models.
Its open-source library, called 🤗 Transformers, lets you create and use transformer models.
Note: from now on, I’ll be using “Hugging Face” to refer to both the company and its Transformers library.
Hugging Face simplifies the process of accessing and training state-of-the-art models in PyTorch, TensorFlow and JAX, making these models accessible to everyone.
By using pre-trained models, users can save time, cut costs and even reduce their carbon footprint.
Whether it’s natural language processing (NLP), computer vision, audio processing or even multimodal tasks, this library will meet your needs.
The best part? It bridges the gap between different Deep Learning frameworks, ensuring flexibility in model development and deployment.
What is Hugging Face’s purpose?
New York-based Hugging Face pursues a clear goal: to democratize AI.
The company recently launched Inference Endpoints, an AI-as-a-service offering designed to meet the important needs of enterprises.
Inference Endpoints is Hugging Face’s solution for simplifying the deployment of Machine Learning models in production.
Instead of weeks of tedious work to get to production, Inference Endpoints enable users to create their own API in just a few clicks.
The needs addressed by Hugging Face are varied. They include, among others, those of companies operating in regulated sectors such as financial services, healthcare and technology.
With this solution, Hugging Face intends to enable over 100,000 users of its platform to go from model experimentation to production – in just a few minutes.
By offering access to cutting-edge models, Hugging Face aims to make Artificial Intelligence accessible to a significant number of businesses.
With this service, companies can now exploit the benefits of Machine Learning models while drastically reducing the technical barriers involved.
Access a range of state-of-the-art models
Hugging Face offers unrivalled access to a wide variety of pre-trained models for Transfer Learning.
Exploring their platform quickly reveals the breadth of their resources.
First and foremost, Hugging Face offers a diverse collection of models to meet a multitude of needs:
- NLP
- Computer Vision
- Audio processing
- Multimodal task solving
Need to solve a specific task in one of these areas? Hugging Face has the right solution for you.
In NLP, for example, it’s easy to find models specialized in translation, text generation, data classification or even natural language understanding.
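Tasks like these can be run in a few lines with the library’s `pipeline` helper. As a minimal sketch, here is a sentiment-classification pipeline; the specific checkpoint used below is one common choice among many on the Hub:

```python
from transformers import pipeline

# A pipeline bundles the tokenizer, the model and the post-processing
# for a given task behind a single callable.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

result = classifier("Hugging Face makes Transfer Learning remarkably easy.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

Swapping the task string (`"translation"`, `"text-generation"`, …) and the model name is all it takes to address a different problem.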
What’s more, the platform is constantly updated with new models.
This ensures that users always have access to the latest advances in Artificial Intelligence.
As an example, the majority of the most widely used models at the time of writing had been updated less than 30 days earlier.
This means you can easily stay at the cutting edge of technology.
Last but not least, the library is easy to use – even for non-experts in Machine Learning.
It’s simple to download and use pre-trained models with this library.
Here, I’m using a text-to-speech model in 8 lines of code:
By the way, if your goal is to master Deep Learning, I’ve prepared the Action Plan to Master Neural Networks for you.
7 days of free advice from an Artificial Intelligence engineer to learn how to master neural networks from scratch:
- Plan your training
- Structure your projects
- Develop your Artificial Intelligence algorithms
I have based this program on scientific facts, on approaches proven by researchers, but also on my own techniques, which I have devised as I have gained experience in the field of Deep Learning.
To access it, click here:
Now we can get back to what I was talking about earlier.
from transformers import AutoProcessor, AutoModel
from IPython.display import Audio

# Load the processor and the pre-trained model weights
processor = AutoProcessor.from_pretrained("suno/bark-small")
model = AutoModel.from_pretrained("suno/bark-small")

# Tokenize the text to synthesize
inputs = processor(text=["Hello, my name is Suno. And, uh — and I like pizza. But I also have other interests such as playing tic tac toe."], return_tensors="pt")

# Generate the audio waveform from the text
speech_values = model.generate(**inputs, do_sample=True)

# Play the audio (works in a notebook only)
sampling_rate = model.generation_config.sample_rate
Audio(speech_values.cpu().numpy().squeeze(), rate=sampling_rate)
This approach enables even inexperienced people to take advantage of these powerful pre-trained models.
You don’t need to be a Deep Learning expert to benefit from it.
Hugging Face offers access to a wide range of pre-trained models. This makes it a valuable resource for anyone wishing to explore the possibilities of Artificial Intelligence.
Switch from TensorFlow to PyTorch in the blink of an eye
Hugging Face offers compatibility between the PyTorch, TensorFlow and JAX Deep Learning frameworks.
This is a major advantage, enabling you to make the most of these libraries in your AI projects.
First and foremost, Hugging Face gives you access to a large number of models stored in a variety of formats.
So even if a project imposes strict constraints on the framework to be used, the library still gives you access to a wide variety of models.
For example, the BERT model is available on the Hub in both TensorFlow and PyTorch formats.
Note: BERT is a powerful NLP model for solving a wide range of tasks. You can find an article on how to use it here.
Hugging Face offers flexibility in model import – but that’s not the only aspect.
The library’s flexibility in its compatibility with different Deep Learning frameworks extends to model export.
Indeed, with this library, you can train your own models with PyTorch, then convert them to TensorFlow format with ease.
This flexibility means you can explore the advantages of both frameworks without worrying about technical complications.
The procedure for converting a model from one format to the other can be found on this page of the documentation.
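As a hedged sketch of that conversion path: save a model trained in PyTorch, reload the saved weights into a TensorFlow model with `from_pt=True`, then save them again in TensorFlow’s native format. A small BERT checkpoint is used here purely to keep the download light:

```python
import tempfile
from transformers import AutoModel, TFAutoModel

model_id = "prajjwal1/bert-tiny"  # small checkpoint, standing in for your own trained model

# Step 1: a PyTorch model (in practice, one you trained yourself)
pt_model = AutoModel.from_pretrained(model_id)

with tempfile.TemporaryDirectory() as save_dir:
    # Step 2: save the PyTorch weights to disk
    pt_model.save_pretrained(save_dir)

    # Step 3: reload those weights into a TensorFlow model...
    tf_model = TFAutoModel.from_pretrained(save_dir, from_pt=True)

    # ...and save them in TensorFlow's native format
    tf_model.save_pretrained(save_dir)
```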
Thanks to its cross-framework compatibility, Hugging Face provides a convenient bridge that brings PyTorch and TensorFlow users together in a single library.
Export models easily for efficient production deployment
Hugging Face offers essential export options for production deployment, including ONNX and TorchScript.
ONNX (Open Neural Network Exchange) makes it easy to convert models into a format compatible with various deployment platforms.
This compatibility means that models can be deployed in a variety of environments with little effort.
It eliminates the need for code rewriting or time-consuming model conversions.
Using TorchScript also facilitates production deployment.
TorchScript lets you compile models into a serialized graph representation that no longer depends on the Python interpreter, optimizing them and making their execution more efficient.
This means you can deploy your models faster and make them run more smoothly in your production environment. This property is essential if you need to create real-time applications.
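As a minimal sketch of this workflow, the `torchscript=True` flag prepares a model for tracing, after which `torch.jit.trace` freezes it into a deployable artifact (the small checkpoint and file name below are illustrative):

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "prajjwal1/bert-tiny"  # small checkpoint to keep the example light

# torchscript=True configures the model so its outputs are traceable tuples
model = AutoModel.from_pretrained(model_id, torchscript=True)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("TorchScript example.", return_tensors="pt")

# Tracing records the operations executed on this example input
# and freezes them into a serializable, Python-independent graph
traced = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]))
torch.jit.save(traced, "bert_tiny_traced.pt")

# The saved artifact can be reloaded (even from C++ via libtorch) and run directly
loaded = torch.jit.load("bert_tiny_traced.pt")
outputs = loaded(inputs["input_ids"], inputs["attention_mask"])
```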
The importance of these export options lies in the fact that they enable you to move from model development to production use with ease.
No need to worry about recreating a model from scratch in another framework, or dealing with performance issues.
Hugging Face makes this crucial transition much easier.
Hugging Face in a nutshell
In its mission to democratize AI, Hugging Face opens up a world of opportunities for businesses and individuals.
Their extensive library of pre-trained models, compatibility with PyTorch, TensorFlow and JAX frameworks, and export options for production deployment, make it much easier to harness the power of this technology.
Whether you’re a beginner or an expert, Hugging Face provides you with considerable resources to explore and exploit this technology smoothly and efficiently.
Today, it’s thanks to Deep Learning that tech leaders can create the most powerful Artificial Intelligences.
If you want to deepen your knowledge in the field, you can access my Action Plan to Master Neural Networks.
A program of 7 free courses that I’ve prepared to guide you on your journey to master Deep Learning.
If you’re interested, click here:
source: VentureBeat – Hugging Face takes step toward democratizing AI and ML