Detects faces in pictures using the Azure Cognitive Services Face API and blurs them to anonymize the image.
| Original Picture | Final Result |
|---|---|
The test_imgs folder contains some images downloaded from Unsplash that can be used to quickly run the notebook.
Navigate to the Azure Portal and log in to your subscription. If you don't have one, you can create your Azure free account here.
Once you have your Azure subscription, create a Face API resource in the Azure portal to get a key and endpoint.
Toggle the checkbox and click Review and Create to proceed to the last step, then click the Create button. When the resource has been deployed, click Go to resource or navigate to the Resource Group where you deployed it.
To make the notebook work correctly, you'll need to put some information into the config.json file. Open the file.
In the left menu of your resource on Azure, navigate to the Overview tab. Copy the values listed below into the config file:
- Subscription id --> subscription_id
- Resource group name --> resource_group_name
- Face API Endpoint --> face_api_endpoint
- Location --> face_api_location
In the left side menu, navigate to Keys and Endpoints and click the copy button to the right of Key 1, then paste the key into your config.json as the face_api_key value.
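Once all the values are in place, config.json should look roughly like the sketch below. The values are placeholders, and the exact layout of the file shipped with the repo may differ slightly:

```json
{
    "subscription_id": "<your-subscription-id>",
    "resource_group_name": "<your-resource-group>",
    "face_api_endpoint": "https://<your-resource-name>.cognitiveservices.azure.com/",
    "face_api_location": "<your-region>",
    "face_api_key": "<your-face-api-key>"
}
```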
You can run the notebook locally or on Azure Machine Learning Service.
A few recommendations:
- You might need to install the packages imported in the first cell
- You need to insert the image URL. There are test images available here (the sketch after this list shows how the URL is used)
- Finish the implementation of the notebook so that it can take a local image and send it to the API
- Implement a complete Azure architecture using Azure Functions for Python and an Azure Storage account.
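As a rough illustration of what the notebook does, here is a minimal sketch that detects faces from an image URL with the Face API REST endpoint and blurs them locally with Pillow. It assumes the config.json keys described above plus the requests and Pillow packages, and the image URL is a hypothetical placeholder; the notebook's actual implementation may differ.

```python
import json
from io import BytesIO

import requests
from PIL import Image, ImageFilter

# Load the Face API credentials from the config file described above.
with open("config.json") as f:
    config = json.load(f)

endpoint = config["face_api_endpoint"].rstrip("/")
key = config["face_api_key"]

# Hypothetical test image URL; replace it with one of the test images.
image_url = "https://example.com/test.jpg"

# Ask the Face API to detect faces in the remote image.
response = requests.post(
    f"{endpoint}/face/v1.0/detect",
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": image_url},
)
response.raise_for_status()
faces = response.json()

# Download the image and blur each detected face rectangle.
image = Image.open(BytesIO(requests.get(image_url).content)).convert("RGB")
for face in faces:
    rect = face["faceRectangle"]
    box = (rect["left"], rect["top"],
           rect["left"] + rect["width"], rect["top"] + rect["height"])
    region = image.crop(box).filter(ImageFilter.GaussianBlur(radius=20))
    image.paste(region, box)

image.save("anonymized.jpg")
```

The face rectangles returned by the API can be combined with any anonymization strategy; Gaussian blur is just one simple choice.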